
Cloud API Performance Challenges

Accessing cloud data in real time is subject to constraints that cannot easily be overcome. Cloud systems are undoubtedly superior for a number of well-documented reasons, but working with remote web servers that handle multiple tenants introduces delays and restrictions.

Large data centers typically cost between $200 and $500 million, so you are lucky if one is close to you; more likely it will be several states, or even countries, away. This tyranny of distance impacts the time between requesting information (say, clicking a link on a web page, or calling an API method) and receiving a response. The end-to-end response time is composed of component delays that affect both outbound and inbound traffic:

  • Endpoint processing time is the delay at each end of the circuit. Turnaround delays on the cloud server are largely unpredictable because the workload is affected by other tenants sharing the server, node, or cluster, and by maintenance activities within the cloud data center that may not be visible to you
  • Nodal delay is the combined time taken to process the data at each hop the signal makes. Routers at each hop switch and regenerate the signal for transmission over long distances, and they have inherent delays as they queue and process huge volumes of traffic arriving from multiple sources. Routers also detect and correct transmission errors and handle congestion, all challenges that consume time
  • Propagation delay relates to the signal in motion. Terrestrial carrier systems propagate signals at around two-thirds the speed of light. This equates to 200,000 km/s, or the speed required to go around the world once in 200 milliseconds. Fast, but given the signal has to travel to and from your cloud server, it still adds to the delay. If your signal has a ground-to-satellite hop, it can be 20 times slower than a terrestrial broadband circuit; a round trip via satellite can be 80,000 kilometers
  • Bandwidth is how much data can be pushed simultaneously through the communication pipe, the material that carries the signal. The pipe can be twisted copper wire, coaxial cable, optical fiber, or electromagnetic transmission, the latter used for microwave links and communication with satellites. Data entering a busy pipe (hop) can be queued (delayed) until the pipe has availability. Like a chain, the total end-to-end throughput is only as fast as the weakest link, and that is probably the pipe in and out of your building
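To get a feel for how these components add up, the delays above can be modeled with simple arithmetic. This sketch is illustrative only; the distance, hop count, per-hop delay, and endpoint processing time are assumed figures, not measurements of any particular service.

```python
# Hypothetical end-to-end response time model summing the delay
# components described above. All input figures are illustrative.

SPEED_KM_PER_S = 200_000  # ~2/3 the speed of light, per the text

def propagation_ms(distance_km):
    """One-way propagation delay over a terrestrial carrier, in ms."""
    return distance_km / SPEED_KM_PER_S * 1000

def response_time_ms(distance_km, hops, per_hop_ms, endpoint_ms):
    """Round-trip estimate: propagation and nodal delay in both
    directions, plus processing time at the two endpoints."""
    return 2 * (propagation_ms(distance_km) + hops * per_hop_ms) + endpoint_ms

# Example: a data center 3,000 km away, 12 router hops at 2 ms each,
# and 50 ms of combined endpoint processing.
print(round(response_time_ms(3000, 12, 2.0, 50.0)))  # → 128
```

Note that even with generous assumptions, propagation alone puts a hard floor under the response time that no amount of local tuning can remove.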

Don’t underestimate the endpoint processing time at both ends of the circuit! At FuseIT we switched to Amazon Web Services because they were quick to roll out faster solid-state (SSD) storage, which substantially reduced the delay in getting data out of the cloud via an API. Other than swapping or upgrading your service, if your business uses a cloud-based web application (SaaS, or software-as-a-service) like Salesforce, you have very limited ability to improve performance. Like everyone else, you share a common cloud interface that is optimized for generic use. Your best hope is to use a faster browser and/or a faster device on which to run it.

For customers using cloud API services, where data is transacted via back-end web services, additional constraints are often applied by the cloud provider. These purposefully limit the flow of data to ensure servers are not swamped with low-value requests. Salesforce, for example, counts the number of API calls each customer makes; if the permitted count (related to the number of licenses) is exceeded, the API is closed for 24 hours. This quickly teaches customers to follow best practices when dealing with the API and, in many cases, to batch multiple requests into a single call. Additional constraints include limiting the number of concurrent connections from a single user and the size of data packets entering and exiting the cloud. Fortunately, there are workarounds to mitigate the API constraints, but they often introduce new challenges.
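The batching practice mentioned above can be as simple as grouping records before sending. This sketch assumes a hypothetical `send` client and a 200-record-per-call limit (Salesforce collection endpoints have comparable limits, but check your provider's documentation for the real figure).

```python
# Sketch of batching to conserve API calls: instead of one call per
# record, group records into batches. The 200-record limit and the
# `send` function are illustrative assumptions.

MAX_RECORDS_PER_CALL = 200  # assumed provider limit

def batch(records, size=MAX_RECORDS_PER_CALL):
    """Yield successive chunks of at most `size` records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def push_all(records, send):
    """Push records using as few API calls as possible; returns the
    number of calls made. `send` is a hypothetical API client."""
    calls = 0
    for chunk in batch(records):
        send(chunk)
        calls += 1
    return calls

# 450 registrations cost 3 API calls instead of 450.
print(push_all([{"id": i} for i in range(450)], send=lambda c: None))  # → 3
```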

Third-party applications that access cloud data typically use APIs or web services, and information can move between two systems in two ways: by pushing or by pulling. Both perspectives appear in the example below, using our Sitecore and Salesforce connector (S4S), where (i) from Sitecore, we push and pull data to/from the Salesforce API, and (ii) from Salesforce, we use callouts to push/pull data to/from Sitecore.

In discussing cloud API performance, let's consider only data being pushed and pulled to the cloud from a third party. In the example above, this equates to Sitecore transacting data through a Salesforce API. Common examples of this are when third-party users:

  • Authenticate into their system using credentials stored in the cloud
  • Need to see data stored in the cloud
  • Need to push data to the cloud

Cloud data, with the advantage of universal access, is usually the single source of truth, or at least the master data repository. When choosing a cloud solution, you need to optimize access performance against four parameters: (1) how quickly data is needed in the third-party system, (2) how often the data changes, (3) how stale the data can be allowed to get, and (4) in which system the updated data is required. The architectural options are:


Real-Time Transactions

This is when data is requested from the cloud in real time, initiated by the third-party application. Real-time is best for cloud data that is largely dynamic, like frequently changing customer information.

Requesting cloud data exposes the third-party user to end-to-end response delays. When a user is waiting to log in to the third-party application, how long is acceptable? Well-established usability research suggests about 1 second is the limit. You will need to mitigate this delay with techniques like a keep-alive process that ensures the end-to-end session is always ready to accept traffic, so it does not need to be re-established after a timeout.
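A keep-alive process like the one described can be as simple as pinging the service before the server-side session would expire. This is a hypothetical helper, not any vendor's API: `ping` stands in for whatever lightweight call your provider supports, and the 30-second timeout is an assumed figure.

```python
# Sketch of a keep-alive helper: it pings the cloud endpoint shortly
# before the session would time out, so a real user request never pays
# the cost of re-establishing the session. All parameters are assumed.

import time

class KeepAlive:
    def __init__(self, ping, timeout_s=30.0, clock=time.monotonic):
        self.ping = ping              # cheap no-op API call (hypothetical)
        self.timeout_s = timeout_s    # assumed session timeout
        self.clock = clock            # injectable clock, eases testing
        self.last_activity = clock()

    def tick(self):
        """Call periodically; pings only when the session nears expiry
        (80% of the timeout). Returns True when a ping was sent."""
        if self.clock() - self.last_activity > self.timeout_s * 0.8:
            self.ping()
            self.last_activity = self.clock()
            return True
        return False
```

In practice `tick()` would run on a background timer, keeping the session warm between real user requests.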

Real-time requests also have the potential to exceed the cloud API count. Imagine a nationwide TV campaign attracting prospects to register interest on your company website: if each registration is pushed individually to a cloud CRM, will the API limit cope with that many calls? If not, consider using another type of transaction (see below).


Server Cached Transactions

This is when data is requested from the cloud and then typically stored in server memory cache so subsequent requests, within a specified time period (say 10 seconds), use the cached data. This reduces the number of API calls and virtually eliminates communication delays. The cloud remains the single source of data.

Server Cached Transactions are ideal for cloud data that is largely static, like product information that changes infrequently.

  • Cloud-based user records are not ideal for caching (due to the number of records) so this option is not effective in an authentication scenario
  • Cloud updates, like password changes, are not immediately reflected in the third-party application. A separate approach may need to be considered when timely access to cloud data is required
  • This method has the potential to exceed the cloud API count although the count will be lower than with real-time transactions
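A minimal version of the server cache described above might look like the following sketch; `fetch` is a hypothetical function wrapping the real API call, and the 10-second TTL matches the example in the text.

```python
# Minimal server-side TTL cache: the first request within each
# 10-second window hits the cloud API; subsequent requests for the
# same key are served from memory. `fetch` is a hypothetical API call.

import time

class TTLCache:
    def __init__(self, fetch, ttl_s=10.0, clock=time.monotonic):
        self.fetch = fetch
        self.ttl_s = ttl_s
        self.clock = clock
        self._store = {}  # key -> (value, fetched_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit and self.clock() - hit[1] < self.ttl_s:
            return hit[0]                  # fresh: no API call
        value = self.fetch(key)            # stale or missing: one API call
        self._store[key] = (value, self.clock())
        return value
```

Note how the cloud remains the source of truth: the cache never invents data, it only delays re-reading it.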


Synched Transactions

This old-school approach is where selected cloud data is localized in a data store. All requests from the third-party application to the cloud are redirected to use the data store instead. A separate sync service updates the local data store with any changes in the cloud since the last sync, and any changes made to the store by the third-party application are detected by the same service and pushed to the cloud. There is often an option that permits the third-party application to write to both the local data store and the cloud simultaneously.

Synched Transactions are ideal for cloud data that is static or dynamic

  • Cloud-based user records can all be localized so login is instantaneous
  • Cloud updates, like password changes, are pushed to the data store, so the new values can be used immediately
  • The sync process batches transactions to the cloud to minimize the number of API calls
  • The cloud API count can be further controlled via the sync frequency
  • Changes made directly in the cloud system will not be available to the third-party application until the data store is synced
  • The local data store can be an offline source of data to other systems, or even a data management utility if the cloud server becomes unavailable
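A single pull-direction pass of the sync service described above might be sketched as follows. `cloud_changes_since` and the store interface are hypothetical stand-ins; a real service would also push local changes back to the cloud and persist the watermark between runs.

```python
# Sketch of one pull-direction sync pass: fetch records changed in the
# cloud since the last sync and apply them to the local store. The
# record shape and `cloud_changes_since` API are illustrative.

def sync_pass(cloud_changes_since, store, last_sync):
    """Apply cloud-side changes newer than `last_sync` to `store`.
    Returns the new watermark (latest modified timestamp seen)."""
    newest = last_sync
    for record in cloud_changes_since(last_sync):
        store[record["id"]] = record             # upsert locally
        newest = max(newest, record["modified"])
    return newest
```

Running the pass on a timer gives the tunable sync frequency mentioned above: a longer interval means fewer API calls but staler local data.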

Conclusion

In selecting a cloud system, we’ve looked at how the distance to your cloud service impacts data access performance. The turnaround response of the service itself also plays a key role, and not all systems are created equal. Other factors, like API call counts and concurrency constraints, must also be considered. These variables are often beyond our control, so we need to improve performance with local optimization strategies. The particular approach will depend on the volume of data, how up-to-date it needs to be (at both endpoints), and how often it changes.
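The selection criteria above can be boiled down to a rough decision helper. The thresholds and mapping below are illustrative assumptions reflecting this article's three options, not vendor guidance.

```python
# Rough decision helper mapping the article's parameters to its three
# architectural options. Thresholds are illustrative assumptions.

def choose_architecture(needs_real_time, data_mostly_static, staleness_ok_seconds):
    if needs_real_time and not data_mostly_static:
        return "Real-Time Transactions"
    if data_mostly_static and staleness_ok_seconds >= 10:
        return "Server Cached Transactions"
    return "Synched Transactions"

print(choose_architecture(True, False, 0))   # → Real-Time Transactions
print(choose_architecture(False, True, 60))  # → Server Cached Transactions
```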

 

FuseIT specializes in CRM integration. Our enterprise Send2CRM, S4S, and CDP4S connectors enable the real-time exchange of data between website technologies and Salesforce. Please contact us for more information or to see a demo of these in action.

 


About the author

Terry Humphris
