Hevo Features
Hevo delivers a user-friendly and reliable data integration platform for organizations with growing data needs. You can use Hevo to automate the process of collecting data from over 100 applications and databases, loading it into a data warehouse, and making it analytics-ready. This enables your analysts to deliver faster analysis and reporting.
Some of the key features of Hevo are discussed here.
Multiple Workspaces within a Domain
This feature allows customers to create multiple workspaces under the same domain name.
For organizations that signed up before Release 2.00: If the domain name is already registered with Hevo, customers can create a new workspace or join an existing one while creating their account. In addition, for each Hevo region, customers can create up to five teams and their respective workspaces.
For customers signing up after Release 2.00: Customers can create one workspace to explore Hevo without the hassle of maintaining multiple teams, workspaces, and pricing plans. They can switch the region at any time directly from the Hevo UI and create Pipelines in the region of their choice.
Multi-region Support
For customers signing up after Release 2.00, Hevo provides support for maintaining a single account across all Hevo regions, with a maximum of five workspaces. As a part of account setup, Hevo creates the first workspace, with a 30-day cool-off period. Once the cool-off period is over, customers can create the next workspace. For each workspace, Hevo automatically selects the nearest region by default based on the customer’s IP address. Customers can switch the region at any time directly from the Hevo UI and create a Pipeline in the region of their choice.
Each workspace has its own pricing plan, billing, and payment details that apply to all the regions associated with it. The consumed Events and any On-Demand Credits used by a workspace are also billed collectively for all Pipelines created across all regions.
ELT Pipelines with In-flight Data Formatting Capability
Hevo’s no-code ELT (Extract-Load-Transform) solution, Pipelines, is a cloud data integration tool that can fetch data from your different data Sources, such as SaaS applications, databases, and file storage systems, and load it to a database or data warehouse. ELT has emerged as a preferred technique for setting up data Pipelines over the traditional ETL process, in which data loading was slow because complex Transformations had to be performed within the Pipeline. Using Hevo’s ELT data Pipelines, your data teams can load high volumes of data easily and quickly and deliver access to fresh and integrated data to analysts.
As the ELT technique loads raw data to your Destination, the data may not match the table format of your database or data warehouse, because different data Sources may store data in different formats. Analysts must run additional computations after loading the data to make it consistent and prepare it for analysis. At Hevo, we believe it is a better practice to format and clean the data for the warehouse before loading it. The Python code-based and Drag-and-Drop Transformations in Hevo allow you to cleanse and prepare the data. Once the data is in the Destination, you can transform it further for analysis by configuring dbt™ Models, creating SQL Models, or combining both in Workflows.
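To illustrate the idea of in-flight formatting, here is a minimal Python sketch of a transformation that cleans event properties before they are loaded. The transform(event) entry point and the event accessor methods shown here are assumptions made for this example, not a definitive Hevo interface.

```python
# Illustrative sketch of an in-flight transformation. The transform(event)
# entry point and the event accessor methods are assumptions for this example.

def transform(event):
    properties = event.getProperties()  # assumed to return a dict of field values

    # Normalize inconsistent field formats before the data reaches the warehouse.
    if properties.get("email"):
        properties["email"] = properties["email"].strip().lower()

    # Rename a Source-specific field to the column name expected in the Destination.
    if "created" in properties:
        properties["created_at"] = properties.pop("created")

    # Drop fields that the Destination table does not need.
    properties.pop("internal_notes", None)

    return event
```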
Draft Pipelines
In Hevo, you can use Draft Pipelines to iterate on Pipelines. Whenever you start creating a Pipeline but exit the Hevo UI halfway through, Hevo saves that Pipeline in Draft status. You can resume from where you left off and complete the configuration of the saved Pipeline.
Historical Data Sync
Historical data is all the data available in your Source at the time of creation of the Pipeline. Hevo fetches your historical data using the Recent Data First approach, getting you the latest Events first.
For database Sources, Hevo fetches all the data available in the selected database(s) and objects as historical data. It uses the primary keys defined in the Source objects to load this data. If primary keys are not present, you can specify a timestamp or incrementing column. However, uniqueness of the data may not be ensured in that case.
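As a rough illustration of the Recent Data First approach for such a database object, the following sketch pulls historical rows in batches ordered from newest to oldest on a timestamp or incrementing column. The fetch_batch helper, column name, and loader are hypothetical stand-ins, not Hevo's actual ingestion logic.

```python
# Minimal sketch of a Recent Data First style historical sync: rows are pulled
# in batches ordered from newest to oldest on a timestamp (or incrementing)
# column. fetch_batch() and load_to_destination() are hypothetical helpers.

def sync_historical(fetch_batch, order_column="updated_at", batch_size=1000):
    cursor = None  # upper bound of the next batch; None means "start from the newest rows"
    while True:
        # Each batch returns rows strictly older than the current cursor,
        # sorted descending on order_column.
        rows = fetch_batch(before=cursor, order_by=order_column,
                           descending=True, limit=batch_size)
        if not rows:
            break  # reached the oldest available data
        for row in rows:
            load_to_destination(row)
        cursor = rows[-1][order_column]  # continue from the oldest row seen so far

def load_to_destination(row):
    # Placeholder for the load step in this sketch.
    print(row)
```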
For SaaS Sources, Hevo uses a default historical sync duration. You can change it to fetch just the amount of data you need. You can also restart the historical load for one or multiple objects if the ingested data is ever lost, either before or after being loaded to the Destination.
While the aim is to fetch your data for the longest duration possible, the historical sync duration is often determined by the API limits imposed by the Source.
Hevo loads your historical data for free, which means that these Events are not counted as billable Events even if you restart the historical load for the Pipeline or specific objects at any time.
Flexible Data Replication Options
Hevo offers flexible data replication options to sync data between your Sources and Destinations. You can replicate entire databases, specific tables, or even individual columns, allowing you to focus on only the relevant data.
You can also customize the type of data you want to load. For example, you can choose to load the historical data or just the new and updated records. Full Mode replicates all the data from the Source to the Destination, whereas Incremental Mode captures only the changes that have occurred since the last replication.
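The difference between the two modes can be pictured with a small sketch; the table, bookmark column, and run_query helper below are illustrative assumptions, not Hevo's implementation.

```python
# Illustrative contrast between Full and Incremental replication. The queries
# and the last_synced_at bookmark are assumptions used only to show the idea.

def full_replication(run_query):
    # Full Mode: copy every row from the Source table on each run.
    return run_query("SELECT * FROM orders")

def incremental_replication(run_query, last_synced_at):
    # Incremental Mode: copy only rows changed since the previous run,
    # tracked here with a simple timestamp bookmark.
    return run_query(
        "SELECT * FROM orders WHERE updated_at > %s", (last_synced_at,)
    )
```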
You can schedule the replication process to match your business requirements. The data is protected through secure connections and encryption protocols. Even after creating the Pipeline, you can modify these settings to accommodate changing requirements.
Sync from One or Multiple Databases
If your data is available across multiple databases in your Source, you can configure your Pipeline to load data from one or more of these databases.
Data Deduplication
Hevo deduplicates the data you load to a database Destination based on the primary keys defined in the Destination tables. If primary keys are not defined, the data is directly appended. Where data warehouses allow primary keys to be defined but do not enforce them, Hevo works around this limitation and ensures that only unique records are loaded to the Destination.
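The following minimal Python sketch shows the general idea of primary key-based deduplication, with later records for the same key replacing earlier ones; the sample records and key column are illustrative only.

```python
# Simple sketch of primary key-based deduplication: when the same key arrives
# more than once, the latest record wins and only unique rows are written out.

def deduplicate(records, primary_key="id"):
    deduped = {}
    for record in records:
        # Later occurrences of a key overwrite earlier ones (upsert semantics).
        deduped[record[primary_key]] = record
    return list(deduped.values())

batch = [
    {"id": 1, "status": "created"},
    {"id": 2, "status": "created"},
    {"id": 1, "status": "shipped"},  # update for an already-seen key
]
print(deduplicate(batch))
# [{'id': 1, 'status': 'shipped'}, {'id': 2, 'status': 'created'}]
```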
Skip and Include Objects
Hevo provides you object-level control over the data you ingest from SaaS-based and database Sources. All the objects that you do not select for ingestion appear as SKIPPED in the Pipeline Objects list. You can include these later, if needed. Similarly, you can skip objects you previously included. You can also skip just the historical data load for an object (if you had chosen to load historical data while creating the Pipeline), while still loading all the new and incremental data. When you include an object, Hevo immediately queues it for ingestion, with historical data being ingested first. Read Including Skipped Objects Post-Pipeline Creation to know how you can include and ingest the skipped objects (if any).
Load New Tables with the Same Pipeline
The Include New Tables feature allows you to automatically ingest data from any new table created in the Source, or from any deleted table that is re-created, after the Pipeline is created. You can keep this option disabled when you create the Pipeline, but you cannot modify this setting later.
Smart Assist
Hevo Smart Assist is the prompt, preemptive, and smart assistance built into the product that gives you complete visibility into and control over your data while helping you minimize costs. Along with this, Hevo alerts you about your Pipeline, Activation status, data ingestion, or any activity that requires your attention through Email or third-party applications such as Opsgenie or PagerDuty. You can also use the 24/7 Live Chat support to connect with our support team and get your queries resolved. Read Getting Alerts in Third-Party Applications to know how to enable these integrations.
On-Demand Credit
Hevo enables you to maintain an On-Demand Credit to continue loading data without interruption even when your Events quota is exhausted. When the Events in your base plan are consumed, the On-Demand Credit is used to cover the additional Events so that your Pipelines are not paused.
You can set the On-Demand Credit limit up to a maximum of 60% of your subscribed plan’s Events.
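As a quick worked example of the 60% cap (the plan size here is an arbitrary illustration, not a specific Hevo plan):

```python
# Worked example of the 60% On-Demand Credit cap. The plan size is an
# arbitrary illustration, not a specific Hevo plan.

plan_events = 20_000_000            # Events included in the subscribed plan
max_on_demand = int(plan_events * 0.60)

print(max_on_demand)  # 12000000 -> up to 12 million additional Events
```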
On-Demand Usage
Options such as On-Demand Credit help you handle any overages so that your data loads without any interruption even when the quota assigned in your plan is consumed. Hevo supports your Pipelines for an additional 24 hours if you have exhausted all quotas, and up to the next working day if this happens over a weekend, so that your business continuity is maintained while you take due action.
Usage-based Pricing
Hevo offers a variety of plans to suit requirements at different scales. You can take a Monthly or Annual subscription. Further, you can choose the Events quota in your base plan and meet additional requirements through On-Demand Credit or by upgrading your plan, as you see fit. Hevo also offers a few Sources for free under its Free plan, which comes with a limited Events quota. Any Events you load from these Sources are free as long as the limit is not exhausted. Any overages are billed to you.
Observability and Monitoring
Hevo offers you various graphs, counts and UI indications that provide visibility into the various aspects of the data replication, including:
- Latency and speed of data ingestion and loading
- Billable and historical usage details through graphs and counts
- Success and failures at each replication stage
- Event failures and resolution assistance
- Filtered views
- Imminent Events quota exhaustion and available actions
You can use the Pipeline Jobs view to understand and follow the complete movement of Events from ingestion to loading for each object selected for replication. Each job represents a Pipeline run and the set of objects selected for ingestion. This feature is currently in Early Access.
Recoverability
Hevo provides full support to recover from any issues at the Source end and keeps retrying the data ingestion. For log-based Pipelines, Hevo restarts the historical load to read from the logs any Events that were not ingested during the downtime. Similarly, if a Destination reports a problem, Hevo retries the data load to ensure that no records are lost. Hevo Support also monitors Hevo’s performance to catch any rare issue at our end.