

You can configure the Data Importer by passing in configuration props. An overview of the available props:

```jsx
steps: {
  fileUpload: {...fileUploadConfig},
  schemaMapping: {...schemaMappingConfig}
},
runJobOnCompletion: {
  webhookUrl: "some.web.hook.url",
  returnWhenComplete: false
},
color: "#f1a400",
shouldInheritFont: false
```

Required Props

  • pipelineId - The id of the Pipeline that will be used to run the job/upload. Check out the Pipeline Documentation for more info.
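A minimal render might look like this (the pipeline id value is illustrative, not from this document):

```jsx
<DataImporter pipelineId="yourPipelineId" />
```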

Optional Props



```jsx
steps: {
  fileUpload: {},
  schemaMapping: {}
}
```

steps specifies which steps should be displayed on the modal, as well as the specific configuration for each step. Learn more about available step configurations.


runJobOnCompletion

true | {}

Specifies whether to run the job at the end of the modal. Pass in false to not run the job.

You can also pass in an optional configuration object:

```jsx
runJobOnCompletion: {
  webhookUrl: "some.web.hook.url",
  returnWhenComplete: true,
  callbacks: Callbacks
}
```
  • webhookUrl - receives the final data as a payload once the job has run
Webhook Data Payload Example

```json
{
  "dataTable": [
    ["Id", "Name", "Date"],
    [1, "Jane", "21-12-2002"],
    [2, "Joe", "22-12-2002"]
  ]
}
```
  • returnWhenComplete - a boolean that specifies whether the data importer modal should finish/close after the job has been fully processed. By default, the modal will process the file asynchronously and return immediately after the request has been received.
  • callbacks - check out the callbacks section for more info
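As a rough sketch of consuming that payload on your server, you could turn the header row plus data rows into plain row objects (rowsFromDataTable is a hypothetical helper name of ours, not part of the Segna SDK):

```javascript
// Convert the webhook's dataTable payload (header row followed by data rows)
// into an array of plain row objects. rowsFromDataTable is a hypothetical
// helper name, not part of the Segna SDK.
function rowsFromDataTable(payload) {
  const [header, ...rows] = payload.dataTable;
  return rows.map((row) =>
    Object.fromEntries(header.map((key, i) => [key, row[i]]))
  );
}

// Example payload shaped like the one shown above
const payload = {
  dataTable: [
    ["Id", "Name", "Date"],
    [1, "Jane", "21-12-2002"],
    [2, "Joe", "22-12-2002"],
  ],
};

console.log(rowsFromDataTable(payload));
// → [{ Id: 1, Name: "Jane", Date: "21-12-2002" }, { Id: 2, Name: "Joe", Date: "22-12-2002" }]
```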


```jsx
color: "#f1a400",
shouldInheritFont: false
```

Customize your data importer to fit your product.
  • color - pass in a hex color code to specify the accent color used throughout the modal.
  • shouldInheritFont - if true, the modal will inherit the parent font-family. Simply wrap the rendered Data Importer with a component styled with your desired font.



Custom Data Importer Button Text

The text to be displayed on the Data Importer Button. Simply pass in the text as a child of DataImporter.
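For example (the label text and pipeline id are illustrative):

```jsx
<DataImporter pipelineId="yourPipelineId">
  Import your data
</DataImporter>
```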


Steps

The Data Importer will guide users through uploading their files via a series of steps. By default, the two steps are:

| Step | Step Key | Functionality |
| --- | --- | --- |
| Upload | fileUpload | Required. Users can upload a file or connect their Google Sheet, and see a preview of the data. If the uploaded file is an Excel workbook with multiple sheets, the user can also select which sheet to import. |
| Mapping | schemaMapping | Users can map/rename their input columns to match the Pipeline's schema. Segna will take its best guess at the mapping, and prompt the user to confirm or edit. |

Each step has its own configuration object, which can be used to change what functionality is shown on the step. As the Data Importer is built entirely on top of Segna APIs, some configuration for these APIs is also exposed in the step configuration where applicable.


You can configure which steps are shown using the steps prop. Passing in a step's key (such as schemaMapping) includes that step in the modal; omitting it skips the step.
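For example, to show only the upload step, you might pass (a sketch using the step keys above):

```jsx
steps: {
  fileUpload: {}
}
```

Omitting schemaMapping here means that step is skipped.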


Callbacks

Each step's configuration object, as well as the runJobOnCompletion object, accepts a callbacks key:

```jsx
callbacks: {
  beforeCallback: ({ jobId }) => { someAction() },
  afterCallback: ({ jobId }) => { someAction() }
}
```

beforeCallback and afterCallback trigger before and after 'Next' or 'Finish' is pressed on a modal step, respectively. Each callback is passed the jobId for your use.
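For instance, a hypothetical pair of callbacks that records analytics events around a step might look like this (logEvent and the event names are our own stand-ins, not part of the Segna SDK):

```javascript
// A sketch of a callbacks object that records events around a modal step.
// logEvent and the event names are stand-ins, not part of the Segna SDK.
const events = [];
const logEvent = (name, jobId) => events.push({ name, jobId });

const callbacks = {
  // invoked when 'Next'/'Finish' is pressed, before the step completes
  beforeCallback: ({ jobId }) => logEvent("step_submitted", jobId),
  // invoked after the step has completed
  afterCallback: ({ jobId }) => logEvent("step_completed", jobId),
};

// Simulate the importer invoking the callbacks for a job:
callbacks.beforeCallback({ jobId: "job-123" });
callbacks.afterCallback({ jobId: "job-123" });
```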


fileUpload

```jsx
fileUpload: {
  jobName: "some job name",
  destinationFileName: "folder/filename",
  webhookUrl: "some.web.hook.url",
  useFullData: false,
  scripts: ["someScriptId"],
  outputFileType: "csv",
  callbacks: Callbacks
}
```

The parameters are a subset of the startJob API call's parameters.

| Parameter | Default | Description |
| --- | --- | --- |
| jobName | "jobId" | Name that appears on your Segna Platform dashboard. |
| destinationFileName | "{pipelineId}-{jobId}" | Name of the cleaned file. Available only if a data bucket is your output destination. |
| webhookUrl | | Webhook URL that gets sent the metadata extracted from the uploaded file. An example of the payload can be found here. |
| useFullData | false | Whether to use the full dataset when going through the data importer flow. Only specify true if you need to query metadata and require it to be representative of the entire file; the upload process can be significantly slower with bigger files. The full file will always be processed regardless when outputting to your destination source. |
| scripts | [] | Array of scriptIds of scripts you have added via the Segna Platform. When the file is uploaded, the scripts run in the order provided. Each script's input is a Python dataframe of the data after some preliminary cleaning, but before field remapping happens. Check out scripts for more details. |
| outputFileType | "csv" | File type of the cleaned file. Options are csv and excel. Available only if a data bucket is your output destination. |
| callbacks | {} | See Callbacks. |


schemaMapping

The schemaMapping step allows a user to confirm whether Segna has correctly mapped the input columns of the uploaded data to the output columns of the desired schema on the Pipeline. Segna will usually guess most of the mapping and only require the user to action low-confidence mappings.

```jsx
schemaMapping: {
  allowColumnCombining: false,
  callbacks: Callbacks
}
```

| Parameter | Default | Description |
| --- | --- | --- |
| allowColumnCombining | false | Allow users to combine multiple input columns into one field of the output schema. Users can also edit which separator to use when combining. |
| callbacks | {} | See Callbacks. |
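Putting the pieces together, a configured importer might look like this (all values are illustrative, not from this document):

```jsx
<DataImporter
  pipelineId="yourPipelineId"
  steps={{
    fileUpload: { jobName: "Monthly sales upload", outputFileType: "csv" },
    schemaMapping: { allowColumnCombining: true },
  }}
  runJobOnCompletion={{ returnWhenComplete: true }}
  color="#f1a400"
>
  Import data
</DataImporter>
```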