Offline data synchronization

With the ever-growing complexity of modern mobile applications, the connection to powerful servers is crucial. Cognitive elements, speech and image recognition, chatbots, and geodata information for maps are virtually impossible to use without online connections to servers with large data storage and high computational power capabilities. This makes offline data synchronization a critical feature.

For most people, reliable network coverage is almost always available, which is convenient because many apps rely on constant online access. However, many businesses need their employees to do their work even if the online connection is only available some of the time or not at all. For example, emergency work needs to be done in the case of an electricity blackout or other disaster scenarios when the mobile network might be down or blocked due to heavy use. Another example is when workers are at remote locations like underground construction sites or offshore wind parks without network coverage. In these cases, working with offline data and intelligent data synchronization mechanisms is necessary and is often the most challenging aspect of building mobile solutions for such business environments.

There are many techniques available for data synchronization for mobile solutions. However, it’s crucial to select the right technique for your specific business needs. In many projects, clients expect the ideal solution: all data is always available offline, never outdated, and synchronized smoothly in the background for every possible app function. Unfortunately, this usually conflicts with technological limitations, such as the sheer mass of data that needs to be synchronized or limitations in computational power to efficiently process complex data synchronization logic.

To have a successful offline data synchronization strategy, you need to investigate and understand your specific business requirements in order to choose the right techniques. There is no one-size-fits-all solution. The following sections explore typical scenarios and recommendations for how to handle them.

Basic strategies

First, let’s go over some basic strategies to handle offline data synchronization, including:

  • Mass of data
  • Sync cycles and prioritization
  • Delta sync: Preprocessed versus on-demand

Mass of data

A core problem for offline use of apps is the sheer amount of data that is required. Workers might need geodata for large regions, thousands of work orders, or complex assets consisting of hundreds of parts. Data reduction is obviously key here, and we will delve into it shortly. However, it’s often unavoidable that gigabytes of data need to be stored for offline use.

In such cases, you need to investigate when and how often the users have access to a high-speed data connection and which slots in their work schedules are appropriate for downloading the data. The better the connection, the shorter these slots can be.

In one scenario, workers needed to be offline the entire day. They gathered in their offices in the morning to plan and coordinate their work and put on their workwear. So, we decided to use this time slot of approximately 15 minutes to update the data on the device. A high-speed WLAN connection was established at the office so that up to one gigabyte of data could be downloaded in that time. In other scenarios, data updates happen overnight at the users’ homes through their private internet connections.

To resolve the issue of handling large amounts of data, you not only need to look into performance optimization and data reduction but also the business processes and workplace configuration. You need to consider how to best integrate the time slot for downloading the amount of data the users need into their daily or weekly work schedules. Additionally, you might need to establish a high-performance network connection at key spots to provide the necessary bandwidth for the downloads.


Sync cycles and prioritization

A crucial topic to investigate is how often offline data is updated and how updates are prioritized. Ideally, all data is always up-to-date. However, in real life, this is rarely guaranteed. Different chunks of data need to be prioritized based on business needs. Some of the data might only be updated once a year, while other data requires updates several times a day, so you can define different synchronization cycles for different chunks of data based on business rules. The smaller the packages of data that require high-frequency updates, the better.
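The idea of different synchronization cycles per chunk of data can be sketched as follows. This is a hypothetical illustration, not a prescribed implementation; the chunk names and intervals are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """One chunk of offline data with its own business-defined sync cycle."""
    name: str
    interval_hours: float        # how often this chunk should be refreshed
    hours_since_sync: float      # time elapsed since the last refresh

def chunks_due_for_sync(chunks):
    """Return the chunks whose sync interval has elapsed, most overdue first."""
    due = [c for c in chunks if c.hours_since_sync >= c.interval_hours]
    return sorted(due,
                  key=lambda c: c.hours_since_sync / c.interval_hours,
                  reverse=True)

# Illustrative catalog: rarely changing master data vs. fast-moving updates.
catalog = [
    Chunk("asset-master-data", interval_hours=24 * 365, hours_since_sync=24),
    Chunk("work-orders", interval_hours=4, hours_since_sync=6),
    Chunk("status-updates", interval_hours=1, hours_since_sync=3),
]
```

Here `chunks_due_for_sync(catalog)` would skip the master data entirely and refresh the status updates before the work orders, because they are proportionally more overdue, which reflects the rule that small, high-frequency packages should be handled first.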

For the client, a decision about which data requires only low-frequency updates might be difficult. A helpful tool is the use of forced ranking for the different chunks of data. As this forces the business owner to make a ranking, it significantly helps open the discussion and find compromises between business needs and the technical feasibility of data synchronization.


Delta sync: Preprocessed versus on-demand

Next, take a look at the back end. The data that will be synchronized between the mobile device and back-end systems usually needs to be prepared, especially if only a delta between the current data on the mobile device and the server is provided — rather than the full data. Calculating such a delta can be a highly complex task for the server. Processing time can be significantly higher than the time required to exchange the data. This becomes even worse if a large number of clients request individual deltas from the server at the same time. Although on-demand, individual delta calculations might look like an elegant solution at first glance, their applicability is often limited in practice.

A better approach can be preprocessing deltas by the server that are shared with all clients. For example, the server could preprocess a delta for a chunk of data overnight based on the updates of the past working day. In the morning, every client could request a copy of this delta file in order to get updated with the past day’s changes. Again, this requires a deep and thorough understanding of the client’s business needs and limitations to define an appropriate preprocessing strategy.
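A minimal sketch of this shared-delta idea, assuming key-value snapshots of a data chunk: the server diffs yesterday’s snapshot against today’s once, and every client applies the same delta file rather than requesting an individual diff. The record names are illustrative.

```python
def compute_delta(old, new):
    """Server side: run once per night, shared by all clients."""
    return {
        "upsert": {k: v for k, v in new.items() if old.get(k) != v},
        "delete": [k for k in old if k not in new],
    }

def apply_delta(local, delta):
    """Client side: bring a local copy up to date with the shared delta."""
    updated = dict(local)
    updated.update(delta["upsert"])
    for key in delta["delete"]:
        updated.pop(key, None)
    return updated

# Illustrative snapshots of a work-order chunk.
yesterday = {"order-1": "open", "order-2": "open", "order-3": "open"}
today = {"order-1": "open", "order-2": "closed", "order-4": "open"}

delta = compute_delta(yesterday, today)  # computed once, downloaded by everyone
```

The expensive diff runs once instead of once per client; the trade-off is that every client must start from the same baseline snapshot, which is exactly what a fixed overnight preprocessing window provides.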

Another concept to consider besides preprocessing is replication. This comes into play when access and delta processing capabilities of the back-end systems are limited. In such cases, the data can be replicated to a dedicated data store in the mobile middleware. So, the data is prepared or preprocessed for the mobile client without disturbing the core back-end systems. Though this approach provides more flexibility and independence for the mobile application, it comes with new challenges inherited from data replication.

Advanced strategies

Now that we’ve introduced basic strategies to handle offline data synchronization, let’s investigate some more advanced strategies that cover topics such as modifying and updating offline data as well as handling shared data. These strategies include:

  • Modification of offline data
  • Shared data sync
  • Auto-sync versus manual sync
  • Push versus pull

Modification of offline data

Using mobile devices offline not only requires getting data on the device, it also requires handling data modifications offline. Of course, it’s best if data modification actions are disallowed or limited when the user is offline. This is feasible for some business scenarios. In an emergency situation such as a blackout, the main requirement for users is to have all relevant data available offline to fix the blackout. There is no immediate need to modify any of the data. However, a different situation would be asset maintenance. Users would certainly like to complete and document their maintenance tasks for the asset while offline.

By their nature, offline data modifications are commonly only for documentation, not for collaboration with other workers. It’s sufficient to cache this data and play it back to the server the next time the user has an online connection. Most conveniently for the user, this can happen seamlessly in the background. However, consider giving the user some control over, or at least visibility into, if and when the data has been played back to the server.
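This cache-and-replay pattern can be sketched as a simple outbox. The class below is a hypothetical illustration: changes recorded offline are replayed in order once a connection is available, and anything that fails to send is kept for the next attempt, so the app can tell the user whether everything reached the server.

```python
class Outbox:
    """Caches offline modifications and replays them when back online."""

    def __init__(self):
        self._pending = []

    def record(self, change):
        """Store a modification made while offline, preserving order."""
        self._pending.append(change)

    def replay(self, send):
        """Try to send each cached change; keep any that fail.

        `send` is whatever transport the app uses; it is expected to
        raise ConnectionError on failure. Returns True if the outbox
        is now empty, so the UI can inform the user.
        """
        still_pending = []
        for change in self._pending:
            try:
                send(change)
            except ConnectionError:
                still_pending.append(change)
        self._pending = still_pending
        return not self._pending
```

A real implementation would persist the pending list to durable storage so a device restart does not lose cached work, but the control flow is the same.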


Shared data sync

Modification of offline data requires caching data until network connectivity is available again. For data that is exclusive to a specific user, such as work orders or individual assignments, modifications are easy to handle. The real challenge starts with shared data, which is not owned by one dedicated user but can be modified by many users. Examples include work orders assigned to a group, a pool of work orders with user self-assignment, or many users working on the same assets and needing to change status information.

Providing offline functionality for such scenarios bears a lot of risks. By the very nature of this problem, conflicts can’t be avoided. If users are offline, the modified data simply can’t be made available to other users. So, different users can work on and modify the same data without knowing what other users do with it at the same time. If such scenarios are unavoidable and there is a strong business need to let users work offline on shared data, you need to set a strong focus on exception handling and defining business rules on how to handle those exceptions. A viable approach for many scenarios is that the first update of a data set wins, and any other updates on outdated data will be ignored.

Another important consideration is investigating which conflicts can be avoided or better managed by the underlying work processes. If users have shared work orders for maintenance tasks but work in different locations, data conflicts are avoided by the defined work process. Or, if users need to track assets taken out of storage, other users could not take the same assets out because they are physically not there anymore. Exception handling can be significantly simplified in these cases. Hence, you need to investigate these scenarios and take them into consideration when defining potential conflicts for offline functions on shared data.


Auto-sync versus manual sync

Modern apps have gotten users accustomed to automatic synchronization processes that run in the background. The user does not need to worry about data updates and is not blocked from doing work. This is very convenient, and many clients expect this approach as standard today. However, it might not be the best approach in complex synchronization scenarios. In cases with long sync processing times, the app might be in an inconsistent state, especially if the user can work on the data while the sync process is running in the background. Many new exception handling scenarios can occur that are difficult to handle. If a large amount of data needs to be synced, the user might need to control when and where the download starts. And if data modifications are cached, the user needs control to reliably send them back to the server within a certain amount of time, such as before the end of a work shift.

For all of this, it’s often advisable to apply a manual synchronization process that the user can control. If this sync blocks the app, many nasty conflicts can be avoided. This leads to more reliability and certainty for the user about what data is synced and when. As an example, users can manually start a sync when they come to work in the morning and prepare to go in the field. After the sync has finished, they can be sure their data is up-to-date and they don’t need to worry about being offline for the rest of the day.


Push versus pull

Another major topic to look at is how to initiate the synchronization. The two major approaches are pushing the sync by the server and pulling it by the client.

Push is usually used for small changes so that every time a small piece of data is modified, the server sends out a push notification to all clients. The advantage of this approach is that data is only synced when needed and is updated very quickly. However, using this approach requires careful consideration. Standard push notification mechanisms do not guarantee if and when the client receives them. Also, if more than one notification is in the queue, delivery in sequential order is not guaranteed. Moreover, when a client is offline for a while, notifications might get lost or the queue might grow large.

Having many notifications in the queue is generally a problem no matter if it’s caused by long offline times or a lot of small changes delivered by the server. Processing times to step through the queue and process all data updates can take a long time on the client devices, and the results for each client can significantly differ and be inconsistent due to different processing order and latency times.

The other approach — a pull mechanism — provides more reliability. A client contacts the server and requests all data updates since its latest request. This can either happen automatically at defined intervals, or the user can manually initiate it. Although the disadvantage is that the client data is not in sync between pull requests, the higher reliability outweighs this disadvantage in many scenarios. The only situation in which we successfully used a push mechanism was for the synchronization of a back-end system with data in the mobile middleware. For synchronizing larger numbers of mobile clients, however, only a pull mechanism has so far proven reliable for us.
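The pull mechanism described above can be sketched with a cursor into an append-only change log. This is a hypothetical illustration: the client remembers how far it has read, and each pull fetches only the changes made since its last request.

```python
class Server:
    """Keeps an append-only change log that clients can pull from."""

    def __init__(self):
        self._changes = []

    def record_change(self, change):
        self._changes.append(change)

    def changes_since(self, cursor):
        """Return all changes after `cursor` plus the new cursor position."""
        return self._changes[cursor:], len(self._changes)

class Client:
    """Pulls updates on its own schedule, e.g. at intervals or on demand."""

    def __init__(self, server):
        self.server = server
        self.cursor = 0        # how far into the log this client has read
        self.data = []

    def pull(self):
        changes, self.cursor = self.server.changes_since(self.cursor)
        self.data.extend(changes)

server = Server()
client = Client(server)
server.record_change("order-1 assigned")
server.record_change("order-2 closed")
client.pull()                          # fetches both changes
server.record_change("order-3 created")
client.pull()                          # fetches only the new change
```

Because the client drives the exchange and tracks its own cursor, no update can be silently dropped the way an undelivered push notification can; an offline client simply catches up on its next pull, in order.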

Offline data handling and data synchronization is a complex topic for many business scenarios. Not only do organizations need to investigate the technical options, but even more importantly, they need to fully understand every detail of the business requirements to deliver viable solutions. The solution needs to work technically, be usable, and be in accordance with the business requirements and processes of the users. This can only be achieved by considering all options, even if at first glance, they do not seem to be state-of-the-art.