You can now upload CSV data and turn it into Salesforce records with just a couple of Flow elements. This action is based on original work by Narender Singh, plus nice new improvements by Hijlko and Suraj Pillai.
(Interested in exporting records to CSV? Check that out here)
(Need a version of CSV import that can work with larger files? Check out this screen component-based alternative)
Here’s all that’s needed to take a typical CSV file and turn it into a collection of Accounts that you can save or modify.
The work is done by the ConvertCSVToRecords invocable action. It takes as input one or more Content Document IDs, which are the main way you refer to files that have been uploaded to Salesforce. In the example above, we use a File Upload screen component to upload a CSV file (see Considerations: Uploading Files, below, for more information). That component outputs a Content Document ID, which is then fed into the new action.
The action takes advantage of Flow support for generic SObjects in invocable actions. This is the same enhancement that enables Collection Actions. Here, at design time, you specify the object type name, “Account”, as an input and select Account as the expected output from the action.
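To make the generic-SObject pattern concrete, here is a minimal, hedged sketch of an invocable method whose output is a List<SObject> that Flow types at design time. This is not the packaged ConvertCSVToRecords source (the real action's code is linked at the bottom of this post); the class name, naive comma splitting, and text-only field handling are all simplifying assumptions.

```apex
// Minimal sketch of the generic-SObject invocable pattern this action relies on.
// NOT the packaged ConvertCSVToRecords code: assumes text-only fields and
// simple comma splitting, just to show the Flow-facing shape.
public with sharing class SampleCsvToRecords {

    public class Request {
        @InvocableVariable(label='Object API Name' required=true)
        public String objectApiName;

        @InvocableVariable(label='CSV File ID (ContentDocumentId)' required=true)
        public String contentDocumentId;
    }

    public class Response {
        // Because this is a generic List<SObject>, Flow asks you to pick the
        // concrete object type (e.g. Account) for the output at design time.
        @InvocableVariable(label='Converted Records')
        public List<SObject> convertedRecords;
    }

    @InvocableMethod(label='Sample: Convert CSV To Records')
    public static List<Response> convert(List<Request> requests) {
        List<Response> responses = new List<Response>();
        for (Request req : requests) {
            // Read the latest version of the uploaded file's body.
            ContentVersion cv = [
                SELECT VersionData
                FROM ContentVersion
                WHERE ContentDocumentId = :req.contentDocumentId
                ORDER BY VersionNumber DESC
                LIMIT 1
            ];
            List<String> rows = cv.VersionData.toString().split('\n');
            List<String> headers = rows[0].trim().split(',');

            Schema.SObjectType objType =
                Schema.getGlobalDescribe().get(req.objectApiName);

            Response res = new Response();
            res.convertedRecords = new List<SObject>();
            for (Integer i = 1; i < rows.size(); i++) {
                if (String.isBlank(rows[i])) { continue; }
                List<String> values = rows[i].trim().split(',');
                SObject record = objType.newSObject();
                for (Integer c = 0; c < headers.size() && c < values.size(); c++) {
                    // Real parsing must convert data types; this sketch only sets text.
                    record.put(headers[c].trim(), values[c].trim());
                }
                res.convertedRecords.add(record);
            }
            responses.add(res);
        }
        return responses;
    }
}
```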
Considerations: Uploading Files
The video in this post shows an example that uses the out-of-the-box File Upload component. That component requires a recordId and stores uploaded files ‘on’ that record. This makes sense when you’re uploading Account- or Contact-related files, but not here, where we’re just trying to import data. As the video shows, I actually created a separate object just to hold these values; the import process doesn’t make any use of this related record. This is all a little annoying. Fortunately, since that video was recorded, members of the Flow community have created a replacement extension called File Upload Improved, which doesn’t require a recordId. We recommend it for use with CSV uploading.
CSV File Considerations
The first row of your file should consist of the field names you want to upload. Make sure you use API names, include “__c” for your custom fields, and pay attention to the required fields for the records you’re uploading. Note: there should be no NULL values in the last column of the CSV.
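For example, a minimal file for creating Accounts might look like the following (Custom_Notes__c is a hypothetical custom field, shown only to illustrate the “__c” suffix; use the API names of fields that actually exist on your object):

```csv
Name,Industry,AnnualRevenue,Custom_Notes__c
Acme Corp,Manufacturing,1000000,Imported via Flow
Globex,Technology,2500000,Imported via Flow
```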
Considerations: Large Data Sets
Per research from contributor kurtfairfield: if the file is too large, the parser hits heap size governor limits, likely in the recursive removeFSeps() method call.
The workaround is to break the large input file into several smaller files. In an example case involving a custom object with about seven 80-character fields, only around 2,500 rows could be parsed before the heap limit was hit, but with several separate, smaller files the parser survives.
As of Version 1.2, the following things are supported:
Row Data Support
- Text and Date fields
- Embedded commas
- Rows/File beginning and ending with quotes
- Newline characters within columns
- Null columns at the end of a row
- The CSV spec’s handling of double quotes, where a literal double quote is represented by consecutive double quotes ("") within a column enclosed in double quotes
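For instance, a file like the following (illustrative data only) exercises several of these cases at once: an embedded comma, a quote escaped as consecutive double quotes, and a newline inside a quoted column:

```csv
Name,Description
"Acme, Inc.","Known as ""Acme"" to customers.
This second line is still part of the same Description column."
```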
Column Header Support
Column headers should be in the first row of the file and should be field API names.
Column headers that are not already field API names will be converted to API name format.
Examples:
Purchase Amount => Purchase_Amount__c
Nuts & Bolts => Nuts_Bolts__c
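As a rough, hedged sketch of that conversion (this is not the packaged code, which presumably also recognizes the object’s actual field API names; the class and method names here are just illustrative):

```apex
public class HeaderNameSketch {
    // Converts a human-readable header into a custom-field-style API name,
    // matching the two examples above. Headers already ending in __c pass through.
    public static String toApiName(String header) {
        String trimmed = header.trim();
        if (trimmed.endsWith('__c')) {
            return trimmed;
        }
        // Replace runs of non-alphanumeric characters (spaces, '&', etc.) with a
        // single underscore, strip stray edge underscores, then add the suffix.
        String cleaned = trimmed.replaceAll('[^a-zA-Z0-9]+', '_')
                                .replaceAll('^_+|_+$', '');
        return cleaned + '__c';
    }
}
// HeaderNameSketch.toApiName('Purchase Amount') => 'Purchase_Amount__c'
// HeaderNameSketch.toApiName('Nuts & Bolts')    => 'Nuts_Bolts__c'
```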
Support for Non-US Separators
Contributor Hijlko added this useful improvement. With it, you can import CSV files containing values that use non-comma separators, such as semicolons. Standard US CSV uses a comma as the field separator, while most European CSVs use a semicolon. The code can now import multi-region CSV files.
Numbers/currency values like 1,234.56 (US) and 1.234,56 (EU) are converted to 1234.56.
You can now set the following:
- Field Separator
- Thousand Separator
- Decimal Separator
- Currency Symbol
If these inputs are left empty, the existing default values will be used.
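A hedged sketch of what that normalization amounts to, assuming the thousands separator and currency symbol are stripped and the decimal separator is swapped for a period (the class and method names here are illustrative, not the packaged code):

```apex
public class SeparatorSketch {
    // Normalizes a localized number string so it parses as a Decimal:
    // remove the currency symbol and thousands separator, then convert the
    // decimal separator to a period.
    public static Decimal parseLocalizedNumber(String raw, String thousandSep,
                                               String decimalSep, String currencySym) {
        String cleaned = raw.trim();
        if (String.isNotBlank(currencySym)) {
            cleaned = cleaned.remove(currencySym);
        }
        cleaned = cleaned.remove(thousandSep).replace(decimalSep, '.');
        return Decimal.valueOf(cleaned);
    }
}
// SeparatorSketch.parseLocalizedNumber('$1,234.56', ',', '.', '$') => 1234.56
// SeparatorSketch.parseLocalizedNumber('€1.234,56', '.', ',', '€') => 1234.56
```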
Support for Non-US Dates
Date values in the formats YYYY-MM-DD and DD-MM-YYYY (or with non-hyphen separators, like YYYY/MM/DD and DD/MM/YYYY) are converted by default to YYYY-MM-DD, which is the Salesforce standard. You can control this with the Date Format input: the default is yyyy-MM-dd (or the local date format), and the other supported formats are dd-MM-yyyy and MM-dd-yyyy.
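As a rough illustration of that conversion (assumed logic, not the packaged code), reordering the day, month, and year pieces based on the chosen Date Format looks roughly like this:

```apex
public class DateFormatSketch {
    // Normalizes any non-digit separator ('/', '.', '-') to '-' and reorders
    // the pieces into yyyy-MM-dd, the format Salesforce expects.
    public static String toIsoDate(String raw, String dateFormat) {
        List<String> parts = raw.trim().replaceAll('[^0-9]+', '-').split('-');
        if (dateFormat == 'dd-MM-yyyy') {
            return parts[2] + '-' + parts[1] + '-' + parts[0];
        }
        if (dateFormat == 'MM-dd-yyyy') {
            return parts[2] + '-' + parts[0] + '-' + parts[1];
        }
        // Default: the value is already yyyy-MM-dd (or yyyy/MM/dd).
        return parts[0] + '-' + parts[1] + '-' + parts[2];
    }
}
// DateFormatSketch.toIsoDate('31/12/2022', 'dd-MM-yyyy') => '2022-12-31'
```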
Attributes
Inputs
| Input | Description |
| --- | --- |
| Object API Name | Required. Tells Flow what to convert your data into, e.g. “Account”. Make sure your CSV file has all of the required fields. |
| contentDocumentId | A single Content Document record ID string. |
| contentDocumentIdList | A collection of Content Document record ID strings. |
| isTrim | Set to true to have any leading or trailing spaces removed. |
| Date Format | Date format used in the file. Default is yyyy-MM-dd (or the local date format). Other supported formats are dd-MM-yyyy and MM-dd-yyyy. |
| Separator 1: Fields | The character that separates two fields. Defaults to a comma; European content often uses a semicolon. |
| Separator 2: Decimals | The character that separates the integer and decimal parts of a value, such as 23.45. Defaults to a period. |
| Separator 3: Thousands | The character that groups thousands, such as 230,454 (US) or 230.454 (EU). Defaults to a comma. |
| Separator 4: Currency | The currency symbol used for currency values in the file. |
Output
A collection of Records
Install Package
Version 1.6 4/17/22 – Production/Sandbox – fix to handle multiple input files
Old Versions
Version 1.4.1 11/29 – bug fixes
- Handles negative numbers.
- Handles ‘1’ and ‘0’ for ‘true’ and ‘false’ respectively for boolean fields.
Version 1.4 Unlocked 11/2/21 Added Date Format input and fixed some bugs
1.3.2 Unlocked 10/2/21
Version 1.3 Unlocked 9/3/21 Support for Non-US Separators, Support for Non-US Dates, Additional Robustness in Handling Non-Alphabetical Characters
- Version 1.01 – 6/13/20 – removed export files, as they have been obsoleted by this work.
- Install Unmanaged v1.0 4/19/20
View Source
This Post was most recently updated on: 12/29/22