Understanding Salesforce Connect
Synopsis
Salesforce Connect is an add-on product for the Salesforce platform that provides seamless data integration, allowing users in core CRM to view, search, and modify data stored outside your Salesforce org—without importing it or running ETL jobs. You can surface data you own and store in other sources without copying it into Salesforce, yet still view and edit that data directly within the Lightning UI, Flow, Apex, and more.
Note: Salesforce Connect requires a separate license; licensing is discussed below.
The remainder of this guide goes into additional detail on the different aspects of this feature set and deepens your understanding. Think of it like a director’s commentary on a movie: not strictly required, but available for those interested in a deep dive.
Common Use Case
It’s useful to ground this discussion in a real-world example. ERP integration is one very common use of Salesforce Connect, enabling sellers and customer service reps to access information that is often not stored in Salesforce because another system is the system of record. Order details, shipments, returns, invoices, and payments all fit into this category.
Customer service agents in particular may access this information hundreds of times a day, and Salesforce Connect avoids the need for them to “flip tabs” or manually cross-reference information from other apps. This saves time and reduces errors in busy call centers, keeping productivity high and improving customer service.

Orders from an external system surfaced via Salesforce Connect
This can be extended to Experience Cloud, allowing customers a robust means to view details about their bills, orders, and shipments in the same interface they use to search the knowledge base and submit requests for support. Developers can use this data in all sorts of creative ways, since it can be accessed in Apex with the same syntax as any other object.
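To illustrate that last point, here is a minimal sketch of reading an External Object in Apex; the object and field names (Order__x, Order_Status__c, Total__c, Order_Date__c) are hypothetical stand-ins for whatever your configuration produces.

```apex
// Hypothetical External Object Order__x (External Objects use the __x suffix).
// The SOQL below reads the remote data in real time; nothing is copied into the org.
List<Order__x> recentOrders = [
    SELECT ExternalId, DisplayUrl, Order_Status__c, Total__c
    FROM Order__x
    ORDER BY Order_Date__c DESC
    LIMIT 20
];
for (Order__x o : recentOrders) {
    System.debug(o.Order_Status__c + ': ' + o.Total__c);
}
```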
Key Benefits of Salesforce Connect vs. ETL
Though ETL is a widely-deployed pattern for system integrations, it has notable downsides. Salesforce Connect offers multiple advantages:
- Real-Time Access to Changes: Unlike ETL (Extract, Transform, Load) processes that involve periodic data synchronization, Salesforce Connect provides real-time access to external data, ensuring users always have the most current information.
- Minimized Storage and Data Movement: With Salesforce Connect, there’s no need to store copies of external data within Salesforce, which reduces storage costs and minimizes data redundancy. Security and compliance are easier to manage, since you avoid copying data between systems.
- Simplified Maintenance: Managing data mappings and transformations within an ETL process can be complex and time-consuming, and failed ETL sync jobs are a hassle to clean up. Salesforce Connect simplifies this by offering point-and-click setup and straightforward ongoing maintenance.
Origin and Evolution
It’s useful to understand the landscape in which this product was introduced, comparing and contrasting with today’s environment to reflect on the evolution and forward trajectory of Salesforce Connect.
Note: this guide makes forward-looking statements. Please make your purchase decisions based on the products and features generally available at the time of purchase. Refer to our safe harbor statement for more on this topic.
ERP and On Premise
When this capability first became available in 2014, the focus and the task at hand for most customers was either ERP integration or integration to RDBMSs like Oracle and SQL Server—likely deployed on premise. Cloud adoption looked very different 10 years ago than it does now.
Salesforce Connect launched with a reliance on the OData protocol, the 2.0 version of which was finalized only shortly before the initial release of the product. This was a good fit for the task, since major ERP vendors supported OData via add-on tools and Microsoft helped their customers solve for the “last mile” to expose their data sets via OData for HTTP access.
In either case, customers were responsible for standing up some intermediary component to provide API access to the back-end data store. IT teams needed to be engaged to deploy and manage these solutions. This is still a viable pattern, with MuleSoft or third-party tools like DataDirect enabling connectivity to a wide variety of data sources.
Moving Beyond OData to the Cloud
Cloud adoption has accelerated greatly since the introduction of Salesforce Connect. Infrastructure spending ballooned more than 5,000%, from around $12 billion in 2010 to an expected $623 billion by 2025 (sources here and here). Studies by Deloitte and Accenture show that more data deployments are planned for the cloud than other forms of hosting, and firms that are further along in their cloud adoption report higher levels of success in terms of their target outcomes.
When a data set is in the cloud, network access via HTTP tends to be available. In theory, this addresses the “last mile” challenge mentioned earlier. REST APIs provide access to small-to-medium scale transactional data sets, and other tools exist to handle high-volume ingestion and egress. And though enterprise architectures are as heterogeneous as ever, a relatively short list of major cloud vendors has emerged as leaders in the “lakehouse” space. This opens the door for Salesforce Connect to target a small number of known endpoints and address a large portion of real-world customer use cases. As of this writing, Salesforce Connect can access structured data on AWS (S3, DynamoDB, and RDS) as well as Snowflake.
For use cases that focus more on front-end applications, GraphQL has emerged as a popular approach to aggregating disparate transactional data stores behind a single API endpoint. The Salesforce Connect team has made initial investments in this area as well, and we have ideas for how to expand what’s possible.
If the data isn’t hosted in a major public cloud, and isn’t accessible via a standard like OData or GraphQL, Salesforce Connect offers an option to leverage any REST API using Apex code. This code option can be thought of as a substitute for using an additional translation component in the architecture (e.g. MuleSoft), though those components are still a wise investment for teams that want to minimize code maintenance.
All told, Salesforce Connect is well-positioned to continue providing value for customers in the era of Big Data and AI. The bulk of this guide covers how each option works, as well as key concepts to bear in mind to make the wisest choices when crafting your architecture. This document also includes information on how Salesforce Connect compares to or complements other offerings from Salesforce.
How It Works
Salesforce Connect achieves its results via real-time callouts from your org to an HTTP endpoint exposing the records in question from a data store customers own or manage. When a user in Salesforce takes action—like loading a List View or invoking a Flow that accesses certain records—Salesforce Connect translates that action into a query, sends the query to the corresponding external data source as an HTTP request, and feeds the results back into the Salesforce platform. The originating action completes, and (in most cases) the fact that the data is external comes with no disadvantage.

Salesforce Connect translates a SOQL statement to a format that works for a given external data source, and performs the query (or DML) via an HTTP request to an API endpoint.
The use of real-time callouts is sometimes referred to as data virtualization or “zero-copy ETL.” Terms aside, the key point is that Salesforce Connect does not “sync” your data, make a copy of the record, or store it in our logs. This is a notable advantage when working with sensitive customer data.
Salesforce Connect exposes data from external systems as an External Object, which you can treat like a Custom Object in features such as:
- Tabs, List Views, and Related Lists in the user interface
- Reports and Dashboards
- Flow and other automation tools
- Query and DML operations in Apex
- Mobile apps and Experience Cloud
- …and more.
Many of the features on this list (like the Related List shown above) are enriched by the ability to create relationships from External Objects to both standard and Custom Objects. External Objects can look “up” to standard or Custom Objects, and vice versa. You can even leverage Indirect Relationships to align data based on a shared value rather than the Salesforce ID, which you may not want to store in the external system.
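For example, with an Indirect Relationship in place, child-to-parent traversal in SOQL works much as it does for Custom Objects (subject to the platform’s limits on joins that involve External Objects). The field and relationship names below are hypothetical: Customer__c on Order__x is assumed to be an indirect lookup to Account, matched on a unique External ID field Customer_Number__c rather than the Salesforce ID.

```apex
// Hypothetical indirect lookup: Order__x.Customer__c matches Account.Customer_Number__c,
// so the external system never needs to store a Salesforce record ID.
List<Order__x> openOrders = [
    SELECT ExternalId, Order_Status__c, Customer__r.Name
    FROM Order__x
    WHERE Customer__c = 'CUST-0042'
      AND Order_Status__c = 'Open'
];
```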
Metadata Configuration
Each External Object is linked to an External Data Source, which holds the reference to the external system along with the parameters needed to understand its API and responses. Typically, an individual External Object is aligned to a remote table in that external database, and each field on the object is aligned to a column in that table. And just like with Custom Objects, the External Object and its fields can be labeled and modeled in a way that’s optimized for human readability. Here’s an example:
- The `ORDER_MGMT` database/schema in the external system is represented as an External Data Source called Order Management.
- The `ORDERS` table in that database is represented as an External Object called Order.
- The `O_ORDERSTATUS` column in that table is represented as a Picklist field on the External Object called Order Status, with values like Open, Fulfilled, and Canceled.
Validate and Sync
In the vast majority of cases, the work required to configure External Objects is reduced significantly by Salesforce Connect’s ability to read the metadata of the target External Data Source. The Validate & Sync feature set helps create the External Objects in Salesforce with fields of the appropriate matching types.
In particular, the Sync feature/button imports the available metadata from the external system into Salesforce, parses it, and presents the administrator with a list of Objects it can create on the administrator’s behalf. This feature also has intelligence to handle External Objects that have already been configured in Salesforce, but may have changed in the external system (e.g. a new column has been added).
Note: The sync feature does not sync any data—only metadata. The term “sync” is meant to imply that the data model in Salesforce is aligned with the tables and columns in the source system.
Available Adapters and System Compatibility
Our Focus: Adapters and Standards
Salesforce Connect does not have a vast library of connectors like MuleSoft or Data Cloud, and it’s not likely that it ever will. Instead, Salesforce Connect includes adapters aligned with open standards like OData, GraphQL, or SQL. The target system needs to adhere to some standard format to be compatible with Salesforce Connect in an out-of-box manner.
For the (many) systems that expose their data via an arbitrary REST API and do not follow a standard, we provide a Custom Adapter Framework for customers or partners to write their own adapter in Apex. This delivers robust results as long as the API in question supports a few key behaviors.
If the target system has no accessible REST API, it’s best to deploy a MuleSoft-based solution to access the data where it’s stored and translate it to a format Salesforce Connect can interpret. MuleSoft can solve for the “last mile” in a number of scenarios, including on-premise hosting, JDBC/ODBC, legacy systems, and more. The combination of MuleSoft and Salesforce Connect is a very popular deployment pattern, and helps illustrate that the two are not competing offerings. (See below for more on the comparison between our offerings.)
To recap, the target system either needs to expose the data in a well-known format, or some additional component needs to reformat it so that Salesforce Connect can leverage it. The notable exception to this rule is Amazon DynamoDB, for which there is a distinct adapter resulting from Salesforce’s global partnership with AWS. (The next section includes more information on this.)
Adapters and Usage Patterns
What follows is a list of the adapters supported by Salesforce Connect, along with a separate section for each to call out additional relevant details and highlight when they’re typically deployed.
- OData: Salesforce Connect supports various versions of the Open Data Protocol.
- OData 4.01: This is the newest version; use it whenever possible.
- OData 4.0: Similar to OData 4.01, but subject to callout limits. Avoid this choice.
- OData 2.0: This version of the standard is more than a decade old; avoid this choice whenever possible.
- SQL: As of this writing, Salesforce Connect is compatible with the SQL APIs of two major data lake vendors:
- Snowflake has a SQL API we access as a Custom Client.
- Amazon Athena provides SQL querying via a REST API for structured data in Amazon S3 buckets.
- GraphQL: GraphQL is a highly flexible approach to data APIs that offers interesting efficiencies, but is less rigorous than OData re: how it dictates the interactions between the client and service. This creates trade-offs to manage.
- GraphQL is also used to access data in Amazon RDS. (Refer to the section below for more.)
- Amazon DynamoDB: high-scale NoSQL store that handles RDBMS-like use cases via PartiQL. Consider this option if you have millions or billions of records to store in a given table.
- Custom Adapter: Customers or partners can use the Custom Adapter Framework to write their own adapter in Apex for any external system that does not fit into one of the options above.
- Cross-Org Adapter: Enables a basic level of integration with another Salesforce org, with notable limitations. This option is included here for completeness, but is not expected to receive large new features.
Read on to learn more about what’s unique for each protocol or adapter.
OData
OData is often used with ERP and order fulfillment systems, since Microsoft and SAP are co-sponsors of the OData standard and support OData natively in their offerings. It’s a robust standard with a broad scope that can, in theory, support nearly any style of client application in an integration use case.
If you’ve never encountered OData, you can think of it as a way to send any SQL statement over a URL. A query or DML statement gets translated into a (long) URL with various parameters, and the service is expected to parse that HTTP request and execute it against the back-end data store. This approach can support nearly any application that works with relational data, but requires the service to parse and interpret a very wide variety of requests. (Imagine writing a generic SQL parser.) Most service owners don’t write this complex code by hand, and offload the heavy lifting either to tools from major vendors like Microsoft and SAP, or battle-tested libraries like Apache Olingo and PyOData. If those options aren’t readily available, MuleSoft can translate nearly any structured data into the OData format.

Salesforce Connect calls out to an OData translation layer, often powered by MuleSoft.
Since OData dovetails with relational databases so cleanly, it’s a great fit for Salesforce Connect’s need to weave external data into the relational paradigm used by the core Salesforce platform. Our customers can query or update any data, leverage flexible filtering options, and quickly configure the data source via the metadata exposed by an OData service.
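As a concrete illustration, here is a rough sketch of how the OData 4.x adapter might translate a filtered SOQL query. The host, entity set, and property names are hypothetical, and the exact URL depends on the metadata the service exposes.

```apex
// A SOQL query against a hypothetical external object backed by an OData 4.x service...
List<Order__x> orders = [
    SELECT ExternalId, Order_Status__c, Total__c
    FROM Order__x
    WHERE Customer__c = 'CUST-0042'
    ORDER BY Order_Date__c DESC
    LIMIT 20
];

// ...is translated into an HTTP request along these lines (simplified):
//
//   GET https://erp.example.com/odata/Orders
//       ?$filter=CustomerNumber eq 'CUST-0042'
//       &$orderby=OrderDate desc
//       &$top=20
//       &$select=OrderId,OrderStatus,Total
```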
- OData 4.01 is the most up-to-date, and should be used whenever possible. One key advantage of the 4.01 adapter is the lack of limits re: the number of callouts; this adapter leverages Named Credentials for HTTP callouts, which can handle very heavy workloads.
- The only other additions in 4.01 as compared to the older 4.0 adapter are batch processing for DML and minimal metadata. Customers need to use the 4.01 adapter to leverage these new capabilities, though workloads running on the 4.0 adapter today will still work on the newer 4.01 adapter. We recommend upgrading as soon as resource availability allows.
- OData 4.0: This adapter was introduced a few years after the OData 4.0 standard was finalized. That version of the standard represented a significant modernization of OData services: XML was replaced by JSON for data and metadata, URLs were changed to make queries more powerful, batch support was added, extensibility was improved, and more.
- The 4.01 Adapter should be used to take advantage of these enhancements; don’t deploy a new configuration with the 4.0 adapter. Existing installations will be supported for the foreseeable future.
- OData 2.0: This is the oldest version of OData we support, and the standard is over a decade old. Don’t deploy a new configuration with the 2.0 adapter.
- Customers who have deployed OData 2.0 services should begin thinking about how to transition to the latest version of the standard; Microsoft and SAP both have options.
- You can also consider moving the data store to a hyperscale public cloud like AWS to take advantage of modern technologies, pay-as-you-go pricing, and high availability. It may be more advantageous on a net basis to move the data into a modern lakehouse instead of rewriting the code for an OData service.
SQL
SQL and RDBMSs represent a wildly popular approach to working with structured data. Moreover, relational data integrates into the core Salesforce platform very smoothly, since relational database technology underpins SObjects.
The challenge in working with these data stores is not SQL, but transport and network access. Salesforce Connect needs an API accessible via stateless HTTP, and most RDBMSs require persistent JDBC/ODBC connections. Hence, the current iteration of the SQL Adapter is optimized for certain strategic Salesforce partners that host the data/service and provide HTTP access to their API. (If you need access to an RDBMS hosted elsewhere, use MuleSoft along with Salesforce Connect.)
- Snowflake offers customers a flexible, easy-to-use data platform that provides SQL access to nearly any data set. The Snowflake compatibility in Salesforce Connect leverages this SQL capability via an API, allowing Salesforce to act as a system of engagement on top of this data and join it to records stored natively in CRM to provide a full 360º view of the customer.
- Salesforce Connect can leverage both standard and dynamic tables, which simplify configurations that would otherwise include managing streams.
- As of this writing, Salesforce Connect does not support Snowflake views, but those views can be “wrapped” in a dynamic table.
- Snowflake offers a variety of authentication options, though as of this writing the only one that has been tested by our team is the OAuth Custom Client. Our documentation discusses how to configure this capability, including authentication setup.
- Note, however, that full coverage of Snowflake’s integration and access control capabilities is beyond the scope of our help articles. Make sure the Role used for authentication has sufficient access to the target table(s), schema, and warehouse.
- Additional discussion of this capability is available on our Architect Blog, as well as Snowflake’s Builders blog.
- Amazon Athena is a managed service on AWS that can execute SQL queries against data stored in flat files (e.g. CSV, JSON, Parquet) persisted in S3 buckets. This provides durable, low-cost storage for very large data sets. Similar to Snowflake, Athena’s UI is SQL-oriented and appeals to highly technical personnel, making Salesforce useful as a system of engagement for this data.
- Common configurations of Athena include a pipeline that populates the data in the S3 buckets in a very deliberate manner to optimize efficiency and reduce query costs. Due to this careful configuration of the underlying S3 resources, the Athena compatibility in Salesforce Connect does not support inserting, updating or deleting data.
- Our documentation discusses the configuration of this feature set. Note that AWS has its own authentication protocol.
- Note that both Athena and S3 need to be configured, managed, and maintained separately. This includes not only the IAM security configuration, but also the extent to which files on S3 are partitioned and indexed to optimize query performance and manage costs. Additionally, both those services have their own billing and cost implications.
- A full discussion of Athena and S3 administration is beyond the scope of our documentation materials. Refer to this example as a starting point for how to configure minimal access to AWS for Salesforce Connect’s purposes.
GraphQL
GraphQL is an increasingly popular data integration option that provides some advantages over vanilla REST. It can be an attractive choice when aggregating data from multiple discrete back-end data stores. Additionally, it provides client applications with more flexibility in terms of what data is returned from the source system—especially for data with relationships. This provides notable advantages when writing UI code.
This approach to relationships is less useful in the context of Salesforce Connect, since the Salesforce platform primarily leverages data relationships modeled via a relational database paradigm. Most times, multiple smaller queries are executed against the external system, and the Salesforce platform stitches together the results for the benefit of the user or workload in Salesforce.
In general, Salesforce Connect’s approach to GraphQL is heavily influenced by the Salesforce platform’s relational database roots. Our customers can sort, filter, and aggregate data on nearly any field. GraphQL APIs typically don’t provide that same level of robustness; it’s more common to see that records can only be filtered or sorted by a few key fields. Broadly speaking, GraphQL does not specify a particularly strict contract between a client and the service, causing both sides to make assumptions as to how to “meet in the middle.” Popular tools like Apollo, Relay, and Hasura each need to insert opinions on fundamental behaviors like pagination and state management.
At present, Salesforce Connect addresses this ambiguity by setting a comparatively high bar for the service to meet, which enables the richest experience for customers accustomed to the full power of the Salesforce platform. We’ve partnered with Amazon to provide a sample solution for Amazon RDS and AWS AppSync that customers can either use directly or treat as a reference implementation and emulate in their own server stack.

Salesforce Connect connects to Amazon RDS by using AWS AppSync as a middleman to handle the GraphQL exposure.
Note: AWS AppSync is a very robust GraphQL service that provides a convenient front-end to many data stores, similar to how their API Gateway offering simplifies the task of managing disparate back-end APIs. It’s worth your attention if you’re interested in this problem space and have invested in AWS.
In the future, we hope to open up the GraphQL Adapter to make it easier to work with existing GraphQL services as they are, rather than requiring them to change to meet Salesforce’s requirements. This moves the burden to the Salesforce side of the integration, and likely will involve Apex code to cope with the divergence in server-side behaviors across different implementations.
Amazon DynamoDB
Amazon DynamoDB is a NoSQL data store that provides customers a database-like solution for heavy B2C-scale data workloads involving millions or billions of records in a single table. It provides a unique architecture that keeps performance consistently high for common operations like query and insert, even when table sizes reach a level that would be problematic for most databases.
This power comes with notable trade-offs. Customers can’t sort, filter, or aggregate the data set in a highly flexible manner like they would in an RDBMS, or join tables together to grow the data model in an arbitrary direction as business requirements change. Instead, customers must understand their query and access patterns up front—before the first version of their application is deployed—and reverse-engineer an appropriate data model based on how future queries will function. Moreover, all records for a given application are typically stored in a single, shared table. Amazon’s documentation covers this topic, which is sometimes referred to as “access-first design.”
Due to the limitations in querying, Reports & Dashboards are not supported in Salesforce Connect for External Data Sources of this type. The reporting capability built into CRM allows users to sort, filter, and group by nearly any field—and DynamoDB does not support this. Broadly speaking, data is exported out of DynamoDB for analysis. (Tableau does not connect directly to DynamoDB either.)
Though these trade-offs may seem significant, they are worth it if your workload is very heavy and your data volume is high. There are a limited number of ways to address the challenges inherent to managing billions of records with fast throughput and high reliability. The main e-commerce experience at Amazon.com is built on DynamoDB, which serves over 100M requests per second on Prime Day while maintaining single-digit millisecond response times.
In the context of Salesforce, this can be valuable for high-volume record keeping e.g. audit or history tables. These data sets typically have clear access patterns—enabling effective access-first design—and customers can both store and query an effectively unlimited amount of data in an efficient manner.
Our documentation highlights an order management example, though the concepts have been applied to history tables, maintenance records, payment history, and more. Review these articles to get a sense for how the columns are populated in a counterintuitive manner in an access-first, single-table design.
Finally, it’s worth recognizing that since DynamoDB is a NoSQL data store, customers often store JSON in the individual attributes (columns) and can add a new attribute (column) to a table at any time. This provides flexibility, but prevents Salesforce Connect from programmatically syncing the metadata to assist administrators when configuring the External Objects and their fields. To address this challenge, Salesforce Connect includes a customized setup wizard that reads an initial subset of data in a given table and suggests a configuration that acts as a workable starting point.
Custom Adapter Framework a.k.a. Apex Adapter
If none of the above options will work to access the target data store, it’s often possible to write Apex code to access the system’s REST API. Indeed, REST APIs are wildly popular, with developers reporting they’re three to four times as likely to use REST compared to the usual alternatives (sources here and here). As of this writing, it’s Salesforce Connect’s second most popular option.
The coding involved takes some effort, but is not overly complicated. Check out this Getting Started guide, familiarize yourself with these key concepts, refer to these examples, and you’ll likely succeed.
Customers often report they want to integrate to a certain target system that has a given API, but the API can’t change in any way. It may be controlled by another team, or a third-party vendor. Writing a custom adapter shifts the burden to the Salesforce “client” to address the service as it is.
This will provide a robust solution as long as the REST API has a few key capabilities, like querying with either client-side or server-side pagination, along with some filtering and sorting options.
- Pagination: Records returned from the external system need to be returned in “pages” or “chunks.”
  - The Apex client can either request a limited number of records, or the server can decide on the page size and return a subset of the total results.
  - If the server controls the pages, note that the pages may not be of equal size.
  - Modern systems like Snowflake tend to manage pagination automatically on the server side; it’s usually wisest to use that option when it’s available.
  - Either way, this gets aligned to the `queryMore` capability in Salesforce.
  - It’s critical that the external system support some form of pagination. Basic features in the Salesforce UI like List Views won’t work without it.
- Filtering: Only records that match certain criteria are returned, for example Orders created in the last month for a particular customer Account.
  - It’s likely the external system doesn’t allow records to be filtered by every field, but this is workable as long as there’s enough control to navigate the data set.
  - Example: for CRM use cases, it’s important to filter the total list of Orders by customer Account, so an agent can see what a given customer bought. It’s less important to filter the Orders by, say, Subtotal (e.g. “show me only Orders with a Subtotal of exactly $50”).
  - Data analysis use cases may involve such investigations, but that’s outside the norm for sellers and customer service agents.
- Sorting: Arrange the records returned in a given order, for example most recent appearing first.
  - Some sorting capability is important for human readability of the data set. Customer service agents probably want to see the Orders for a customer sorted by criteria like “most recent on top” or “most expensive to least expensive.”
  - That said, as with filtering, it’s usually not critical for every field to be sortable. As long as Orders can be sorted by Created Date and Amount, that may be enough for your customer-facing teams to work with the information. Arranging the Orders by Subtotal is likely to be of limited value.
Other capabilities are secondary in importance:
- Aggregation is a nice-to-have, supporting SOQL syntax like `SELECT COUNT(ExternalId) FROM My_Object__x`. If the external system supports aggregation in its query syntax, these SOQL statements can be processed very efficiently.
  - Note, however, that server-side aggregation is not leveraged by the Reports & Dashboards built into CRM. So instead of a `COUNT()` query, a Report in CRM will load all the records matching certain criteria, then count them.
- DML operations like `insert` and `update` will allow Salesforce users to edit data in the remote system, though the Apex code may make callouts to other API methods to invoke compound operations in the external system, like `requestRefund(orderId)`. (See the sketch after this list.)
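The snippet below sketches what both of those capabilities look like from the Salesforce side, reusing the hypothetical My_Object__x object from the bullets above; Status__c is an assumed custom field. Note that DML on writable External Objects from Apex goes through the asynchronous Database methods.

```apex
// Aggregation: pushed down to the external system only when the adapter and API support it.
AggregateResult[] agg = [SELECT COUNT(ExternalId) cnt FROM My_Object__x];
Integer totalRows = (Integer) agg[0].get('cnt');

// DML: External Objects are edited from Apex with the asynchronous Database methods
// (insertAsync / updateAsync / deleteAsync); the adapter turns each row into callouts.
// Status__c is a hypothetical custom field on the external object.
My_Object__x row = new My_Object__x(Status__c = 'Refund Requested');
List<Database.SaveResult> results = Database.insertAsync(new List<My_Object__x>{ row });
```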
If you’re exploring the option of writing your own adapter in Apex, you may find it helpful to experiment with this freely-available API that supports robust querying with filtering, sorting, and pagination.
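For orientation, here is a minimal, read-only sketch of the two Apex classes the Custom Adapter Framework requires: a DataSource.Provider that declares capabilities, and a DataSource.Connection that implements sync() and query() against a hypothetical REST endpoint. Pagination, filter pass-through, and error handling are deliberately omitted; the linked documentation and examples show the production-ready patterns.

```apex
// Minimal custom adapter sketch. The endpoint path and JSON shape are hypothetical; a real
// adapter should also honor the filter, order, and paging details in DataSource.QueryContext.
global class SampleOrderProvider extends DataSource.Provider {
    override global List<DataSource.AuthenticationCapability> getAuthenticationCapabilities() {
        // Authentication is configured on the External Data Source / Named Credential.
        return new List<DataSource.AuthenticationCapability>{
            DataSource.AuthenticationCapability.ANONYMOUS
        };
    }
    override global List<DataSource.Capability> getCapabilities() {
        return new List<DataSource.Capability>{
            DataSource.Capability.REQUIRE_ENDPOINT,
            DataSource.Capability.ROW_QUERY
        };
    }
    override global DataSource.Connection getConnection(DataSource.ConnectionParams params) {
        return new SampleOrderConnection(params);
    }
}

global class SampleOrderConnection extends DataSource.Connection {
    private DataSource.ConnectionParams connectionParams;

    global SampleOrderConnection(DataSource.ConnectionParams params) {
        this.connectionParams = params;
    }

    // Describes the external "table" so Validate & Sync can create the External Object.
    override global List<DataSource.Table> sync() {
        List<DataSource.Column> columns = new List<DataSource.Column>();
        columns.add(DataSource.Column.text('ExternalId', 255)); // required
        columns.add(DataSource.Column.url('DisplayUrl'));       // required
        columns.add(DataSource.Column.text('Status', 255));
        return new List<DataSource.Table>{
            DataSource.Table.get('Orders', 'ExternalId', columns)
        };
    }

    // Runs for every SOQL query, list view, or report that touches the External Object.
    override global DataSource.TableResult query(DataSource.QueryContext context) {
        HttpRequest req = new HttpRequest();
        req.setMethod('GET');
        // The base URL comes from the External Data Source definition; '/orders' is assumed.
        req.setEndpoint(connectionParams.endpoint + '/orders');
        HttpResponse res = new Http().send(req);

        List<Map<String, Object>> rows = new List<Map<String, Object>>();
        for (Object item : (List<Object>) JSON.deserializeUntyped(res.getBody())) {
            Map<String, Object> src = (Map<String, Object>) item;
            rows.add(new Map<String, Object>{
                'ExternalId' => String.valueOf(src.get('id')),
                'DisplayUrl' => connectionParams.endpoint + '/orders/' + String.valueOf(src.get('id')),
                'Status'     => src.get('status')
            });
        }
        return DataSource.TableResult.get(context, rows);
    }

    // Invoked by SOSL and global search; this helper performs a name-based lookup.
    override global List<DataSource.TableResult> search(DataSource.SearchContext context) {
        return DataSource.SearchUtils.searchByName(context, this);
    }
}
```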
Cross-Org Adapter
The Cross-Org Adapter allows customers to read or write data in a specific target org from another “calling” org. It creates a point-to-point connection that can surface objects in the target org as External Objects in the calling org. This can create positive outcomes for customers when many subsidiary or “child” orgs all reference a single master data set from one “parent” org.
Outside of that scenario, there are many limitations. Since External Objects are treated like Custom Objects, the special features on standard objects like Accounts, Contacts, Opportunities, Leads, and Cases aren’t available. Files and Activities aren’t available either. The list of limitations includes, but is not limited to:
- Uploading, downloading, previewing, and managing Files
- Reporting across orgs that aggregates similar Objects
- Special handling of Activities, Events, and Tasks
- Out-of-box tools for tracking Field History
- Lead conversion workflows
- Opportunity pipeline workflows
- Case closure workflows
- Special handling for content in the Knowledge object
- Special handling for objects in Industries apps such as Financial Services Cloud and Health Cloud
Some customers desire an outcome where a User in Org A can leverage all the data, configurations, and apps in Org B as if they were actually a User in Org B. This is out of scope for the Cross-Org Adapter, and there are no plans on the roadmap to target that concept. Users are fundamentally bound to the tenant in which they reside, and do not cut across multiple tenants/orgs. Changing that would have major implications.
Customers seeking to integrate multiple orgs together should explore Data Cloud; the harmonization and replication features in particular speak to common multi-org scenarios. If that isn’t a fit, we recommend you explore the MuleSoft product suite to address other integration needs.
Note: Queries made by Salesforce Connect from a calling org to a target org count against the API limits of the target org.
Comparisons to Other Salesforce Products
Architects planning integrations know they need to carefully evaluate their options and select the one that best fits the job at hand. This section takes a closer look at how Salesforce Connect compares to other tools provided by Salesforce that interact with external data.
MuleSoft
MuleSoft Anypoint and Salesforce Connect are not direct “apples-to-apples” substitutes. In fact, they’re usually better together.
MuleSoft Anypoint is a wide-ranging integration toolset that can solve nearly any integration challenge, allowing unified access to systems that have clean APIs—as well as those that don’t. Very often, the two work together to “meet in the middle,” exposing data from anywhere via OData. And though MuleSoft is very powerful, Salesforce Connect is still needed to access data from external systems without copying it into the target org.
So to reiterate, MuleSoft alone (without Salesforce Connect) can only copy data into the target org. This has storage and security implications, and the data is no longer real-time. On the other hand, Salesforce Connect alone (without MuleSoft) can only access systems that have a “clean,” robust API with HTTP access. The APIs that best fit that criteria come from major vendors like AWS, Snowflake, or Microsoft.
Data Cloud
BYOL Data Sharing in Data Cloud resembles the functionality of Salesforce Connect, but harmonizes data from disparate sources and unlocks next-generation use cases around omni-channel marketing and AI. There’s no equivalent for those in Salesforce Connect. Data Cloud also has a wide array of connectors, as well as other features like replicating data across different orgs used by different departments.
That said, if your primary goal is empowering customer service agents or sellers with external data in CRM, Salesforce Connect enables that with a simpler architecture. And if you want users in CRM to create or edit external data (without writing code), you need to use Salesforce Connect.
Tableau and CRM Analytics
Tableau and CRM Analytics provide best-in-class visualization and reporting for data residing in many external data sources. Those primarily focused on visualization and data analysis will be better served by sticking with those solutions than by using Salesforce Connect to feed external data into CRM reports and dashboards. Built-in reporting is not as feature-rich, and is best for operational reports displaying a few thousand rows of data (or fewer).
To learn more about your data integration options, check out our Data Integration Decision Guide.
Considerations, Limits, and Troubleshooting
Key Considerations
Data surfaced through Salesforce Connect as External Objects offers most—but not all—of the features available with Custom Objects. Here are the most common areas that trip customers up:
- Record Sharing: the vast majority of external data stores do not have anything equivalent to Salesforce’s Owner field, and all the sharing rules built on top of that. Typically, Salesforce authenticates to the remote system with a defined “role” such that all users in Salesforce of a certain type (e.g. Sales or Service) will see the same records.
- A notable exception to this is data sources hosted by major vendors like AWS or Microsoft. Cloud offerings from those vendors come with robust Identity & Access Management tools, so data can be secured via RBAC, ABAC, and more. You can usually leverage this in a successful configuration with a careful use of Named Credentials, authenticating to their APIs in the correct way.
- Another exception would be any system that supports Per User authentication, most commonly enabled via OAuth. Here again we see that judicious use of Named Credentials can solve for AuthZ in external systems.
- Triggers and automation: In the vast majority of cases, Salesforce is not notified when external data changes. The most robust way to solve for this is via an event-driven architecture, which (like robust access controls) is typically available if the data resides with a major vendor platform like AWS. Use the eventing tool of your choice to send an event to Salesforce when data changes in an external system. (See the sketch after this list.)
- Formulas and Roll-Up Summaries are not available on External Objects. Those features are powered by the Salesforce platform’s control of the underlying data store.
- There’s a limited exception to this in the Amazon DynamoDB Adapter. Formulas can be used to create “virtual columns” that don’t actually exist in the remote table. This can solve for simple scenarios e.g. City = “Chicago” and State = “Illinois” combined into a Location field with the value of “Chicago, Illinois”.
- This formula approach may be extended into the other adapters in the future. Apex code can also achieve the same results when using the Custom Adapter Framework.
- Reporting in core CRM will not load high volumes of external records into a single Report just because the external database contains millions (or billions) of records. Existing limits still apply; expect reports to be limited to a few thousand records.
- The Reporting module in core CRM makes a larger number of comparatively small queries, then stitches them together to create the results of the report.
- Tools like Tableau and CRM Analytics are usually needed when working with millions of records in an analytical/reporting context. Most times, data needs to be extracted out of the source system for reporting on large data sets to be efficient. (The data gets stored in a different, optimized manner.)
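To make the event-driven suggestion above concrete, here is a minimal sketch that assumes the external system (or an intermediary eventing service) publishes a hypothetical platform event, Order_Change__e, whenever a remote order changes. A trigger, Flow, or other subscriber can then react inside Salesforce without the order record ever being copied into the org.

```apex
// Minimal sketch: subscribe to a hypothetical Order_Change__e platform event published by
// the external system when remote data changes. Order_Number__c and New_Status__c are
// assumed custom fields on the event.
trigger OrderChangeSubscriber on Order_Change__e (after insert) {
    List<Case> escalations = new List<Case>();
    for (Order_Change__e evt : Trigger.new) {
        if (evt.New_Status__c == 'PAYMENT_FAILED') {
            escalations.add(new Case(
                Subject  = 'Payment failed for order ' + evt.Order_Number__c,
                Priority = 'High'
            ));
        }
    }
    if (!escalations.isEmpty()) {
        insert escalations;
    }
}
```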
Analogy: Salesforce Connect as Plumbing
A broad rule of thumb that’s useful when evaluating Salesforce Connect is to think of it like plumbing that brings data into your org instead of water. The data comes in through one central point and is distributed to different rooms of the house, each of which may have appliances or fixtures that use it.
If something seems to be wrong with the dishwasher or sink, you should troubleshoot by carefully inspecting that particular fixture before looking at the plumbing. There could be an issue with the pipes, but if that’s the case, you’ll likely see problems elsewhere in the house.
Additionally, the various capabilities of our platform are like appliances or fixtures that each use water in a slightly different way. Customers typically ask about:
- User interface
  - Lightning
  - Classic
  - Console
  - Mobile
- Analytics
  - Reports & Dashboards
  - CRM Analytics
  - Tableau
- Automation
  - Flow
  - OmniStudio
- Development tools
  - Apex
  - APIs
  - LWC

Other platform features using data from Salesforce Connect
Each of these features is like an appliance built by a different team at a different time. None of them use External Objects in precisely the same manner; External Objects come from a data framework that each feature team implements against in their own way. Hence, questions such as “how does Console work with External Objects?” need to be directed to the team who can speak to how that feature is built.
Unfortunately, deep inspection of the pipes doesn’t tell you much about how the sink works.
Limits
As of this writing, there are two notable areas in which customers may encounter “hard limits” in Salesforce Connect’s processing. This section of the documentation contains a more complete list of everything that could be construed as a limitation, though only these two are commonly encountered by customers and require additional commentary.
- Callouts: Salesforce Connect calls out to the remote system to fetch external data in response to some user action in the platform. The action could be as simple as rendering a List View of the top 20 records, or it could be a complex piece of Apex invoked by Flow.
- Due to legacy technical concerns in the OData 2.0 and 4.0 adapters, there is a limit on the number of callouts per hour that can be made to obtain or manipulate remote data. Those adapters make up to 20,000 callouts per hour by default, across all users in an org.
- The other adapters effectively do not have a callout limit. Again, refer to the official documentation for full details.
- Customers facing this limit should upgrade to the 4.01 Adapter, which does not have the callout limit. Earlier sections of this guide cover that topic.
- Contact our support team if you’re using OData 2.0 and upgrading to 4.x is in your plans but is still in the future.
- ID Mapping: The Salesforce platform needs an ID for each record in the database for many (if not most) of the capabilities to function. List Views, Record detail pages, Reports, and more all need this to perform their operations. To apply these capabilities to external data, Salesforce Connect creates a Salesforce ID for each remote record it encounters, and maps it to the External ID that can be used to query that record from the remote system.
- This mapping is temporarily stored in a table we manage internally, and there is a limit to the rate at which mappings can be inserted into this table: 100,000 mappings per hour can be created.
- The mapping is created the first time Salesforce Connect encounters the record in question; it happens automatically the first time we query the record.
- Just like Salesforce’s 15/18 character IDs, the External ID is arbitrary and treated as a String.
- Contact our support team if you’re planning something like a “one-time data load” and you’re concerned your users might touch more than 100,000 different records in a single hour.
- There are plans in place to raise this limit significantly in the future; refer to our safe harbor statement.
Troubleshooting
Salesforce Connect is highly reliable, and its straightforward architecture minimizes problems at run time. When customers encounter trouble, it’s typically at setup time and almost always relates to authentication.
If you’re having trouble authenticating to the target system, you’ll need to engage the team that maintains or supports that endpoint to understand the auth options. That might be the support team from a major vendor like AWS or Snowflake, or it could be the developers that built the custom server-side back-end.
It’s best to troubleshoot by simplifying the number of “moving parts,” so one good strategy is to take Salesforce Connect out of the equation and use curl or Postman to reach the endpoint and see if you can get a successful response. Newer adapters use Named Credentials for callouts and authentication, so you may want to refer to the Troubleshooting section of this guide. Named Credentials can also be used with the older adapters using this strategy.
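If the endpoint is only reachable through a Named Credential (for example, because it requires OAuth or mutual TLS), a quick anonymous Apex callout can serve the same purpose as curl or Postman. The Named Credential name and path below are hypothetical; for OData services, the read-only $metadata document is a convenient target.

```apex
// Anonymous Apex sketch: verify the Named Credential and endpoint respond before (or while)
// configuring the External Data Source. 'My_OData_Service' is a hypothetical Named Credential.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_OData_Service/$metadata');
req.setMethod('GET');
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode() + ' ' + res.getBody().left(500));
```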
Once you can authenticate, the Validate & Sync capability handles the heavy lifting in the setup process. The notable exception to this is the Amazon DynamoDB Adapter, which has its own configuration wizard to aid setup as much as possible in the absence of metadata describing the external table. If your DynamoDB table has a complex model, you’ll want to carefully review our documentation on qualifiers and consider modeling your solution on this example.
If you’re having trouble writing the Apex code for the Custom Adapter, refer to the section above on that adapter for links to the documentation and samples.
Licensing
Salesforce Connect is not free, and comes with an add-on cost based on the number of connections. Customers need one license for each endpoint from which Salesforce Connect accesses data. The endpoint can, in theory, aggregate data from multiple back-end data stores—and that has no impact on licensing. In the case of AWS or Snowflake, a single endpoint can surface a vast amount of data.

One license of Salesforce Connect allows for one External Data Source, defined by one endpoint. This can aggregate data from multiple back-end data stores.
Licensing for the Cross-Org Adapter
The exception to this is the Cross-Org Adapter, which allows a given Salesforce org to access up to five other orgs. Note, however, that if Org A needs to read data from Org B—and vice-versa—then each of those orgs needs a license. That requires (at least) two licenses to be purchased, with one placed in each org.

Licensing for the Cross-Org Adapter

One license of Salesforce Connect enables connections to both one external endpoint and five other Salesforce orgs.
One nice bonus is that a single license of Salesforce Connect allows an org to connect to one external endpoint and up to five other Salesforce orgs.
To get an accurate estimate of the licensing costs, it’s best to consult with your Salesforce account executive, who can provide tailored pricing based on your organization’s needs.
If you’re still evaluating Salesforce Connect, you can sidestep licensing concerns by trying it out in a Developer Edition or Trailhead org; both contain the licenses you need at no cost.
Links and Resources
- Starting point in Help and Training
- Quick Start on Trailhead: try it now!
- Learn about our Amazon DynamoDB compatibility on Trailhead
- Dive deeper in the Salesforce Developer podcast and on YouTube
- Use the Custom Adapter Framework to write your own adapter in Apex and understand these key concepts
- Limits and Considerations for Salesforce Connect
- Read our Data Integration Decision Guide to understand our data integration options
Conclusion
Salesforce Connect offers a powerful and efficient way to integrate external data into your Salesforce environment. With its real-time data access, reduced data redundancy, and simplified maintenance, it presents a compelling alternative to traditional ETL processes. By leveraging out-of-the-box adapters and following best practices for troubleshooting, you can maximize the benefits of Salesforce Connect.



