Data synchronization

Data synchronization is the process of establishing consistency among data on remote sources and the continuous harmonization of the data over time. It is fundamental to a wide variety of applications, including file synchronization and mobile device synchronization, e.g. for PDAs.[1]

Practical solutions

There are tools available for file synchronization, version control (CVS, Subversion, etc.), distributed filesystems (Coda, etc.), and mirroring (rsync, etc.); all of these attempt to keep sets of files synchronized. However, only version control and file synchronization tools can deal with modifications to more than one copy of the files.

  • File synchronization is commonly used for home backups on external hard drives or for updating files carried on USB flash drives. The automatic process avoids copying files that are already identical, and thus can save considerable time compared with a manual copy, while also being less error prone (a simple sketch of this idea follows this list).[2]
  • Version control tools are intended to deal with situations where more than one person wants to simultaneously modify the same file, while file synchronizers are optimized for situations where only one copy of the file will be edited at a time. For this reason, although version control tools can be used for file synchronization, dedicated programs require less overhead.
  • Distributed filesystems may also be seen as ensuring multiple versions of a file are synchronized. This normally requires that the devices storing the files are always connected.[citation needed]
  • Mirroring: A mirror is an exact copy of a data set. On the Internet, a mirror site is an exact copy of another Internet site. Mirror sites are most commonly used to provide multiple sources of the same information, and are of particular value as a way of providing reliable access to large downloads.
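
As a rough illustration of how a file synchronizer skips already-identical files, the following sketch (in Python, with hypothetical paths and helper names; it is not the algorithm of rsync or any other particular tool) hashes each file on both sides and copies only those that differ or are missing:

    import hashlib
    import shutil
    from pathlib import Path

    def file_digest(path: Path) -> str:
        """Return a SHA-256 digest of the file's contents."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def sync_one_way(source: Path, target: Path) -> None:
        """Copy files from source into target, skipping files that are already identical."""
        for src_file in source.rglob("*"):
            if not src_file.is_file():
                continue
            dst_file = target / src_file.relative_to(source)
            if dst_file.exists() and file_digest(src_file) == file_digest(dst_file):
                continue  # identical copy already present; nothing to transfer
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)

    # Hypothetical usage, e.g. backing up to a USB flash drive:
    # sync_one_way(Path("/home/user/documents"), Path("/media/usb/documents"))

Tools such as rsync go further, using rolling checksums so that only the changed portions of each file need to be transferred.[2]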

Synchronization can also be useful in encryption, by synchronizing Public Key Servers.[3]

Theoretical models

Several theoretical models of data synchronization exist in the research literature, and the problem is also related to the problem of Slepian–Wolf coding in information theory. The models are classified based on how they consider the data to be synchronized.

Unordered data

The problem of synchronizing unordered data (also known as the set reconciliation problem) is modeled as an attempt to compute the symmetric difference S_A \oplus S_B = (S_A - S_B) \cup (S_B - S_A) between two remote sets S_A and S_B of b-bit numbers.[4] Some solutions to this problem are typified by:

Wholesale transfer
In this case all data is transferred to one host for a local comparison.
Timestamp synchronization
In this case all changes to the data are marked with timestamps. Synchronization proceeds by transferring all data with a timestamp later than the previous synchronization.[5]
Mathematical synchronization
In this case data are treated as mathematical objects and synchronization corresponds to a mathematical process.[4][6][7]
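
As a minimal sketch of the first two approaches above (wholesale transfer and timestamp synchronization), assuming in-memory Python sets and dictionaries rather than any real transfer protocol:

    def wholesale_reconcile(local: set[int], remote: set[int]) -> set[int]:
        """Wholesale transfer: the remote set is shipped in full, and the
        symmetric difference S_A \oplus S_B is computed locally."""
        return (local - remote) | (remote - local)

    def timestamp_reconcile(records: dict[int, float], last_sync: float) -> dict[int, float]:
        """Timestamp synchronization: only records stamped later than the
        previous synchronization are transferred."""
        return {key: ts for key, ts in records.items() if ts > last_sync}

    # Example with small sets of b-bit numbers:
    # wholesale_reconcile({1, 2, 3}, {2, 3, 5})  ->  {1, 5}

The mathematical approach of [4][6][7] instead encodes each set (e.g. as a characteristic polynomial) so that the symmetric difference can be recovered with communication proportional to the size of the difference rather than the size of the sets.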

Ordered data

In this case, two remote strings \sigma_A and \sigma_B need to be reconciled. Typically, it is assumed that these strings differ by up to a fixed number of edits (i.e. character insertions, deletions, or modifications). Data synchronization is then the process of reducing the edit distance between \sigma_A and \sigma_B, down to the ideal distance of zero. This is the model underlying all filesystem-based synchronization (where the data is ordered). Many practical applications of it are discussed or referenced above.
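
To make the ordered-data model concrete, the following sketch computes the edit distance between two strings using the standard dynamic-programming recurrence; a synchronizer's goal is to drive this distance between \sigma_A and \sigma_B to zero while communicating as little as possible (the sketch itself is a plain local computation, not a synchronization protocol):

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance: the minimum number of character insertions,
        deletions, and substitutions needed to turn a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            cur = [i]
            for j, cb in enumerate(b, start=1):
                cur.append(min(
                    prev[j] + 1,               # delete ca
                    cur[j - 1] + 1,            # insert cb
                    prev[j - 1] + (ca != cb),  # substitute ca -> cb
                ))
            prev = cur
        return prev[len(b)]

    # edit_distance("sigma_A", "sigma_B") == 1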

It is sometimes possible to transform the problem to one of unordered data through a process known as shingling (splitting the strings into shingles, i.e. overlapping substrings of a fixed length).[8]
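
A brief, simplified sketch of shingling, assuming a fixed shingle length k (the cited work uses more elaborate encodings so that the original string can be reassembled from the reconciled shingles):

    def shingles(s: str, k: int = 4) -> set[str]:
        """Split a string into the set of its overlapping length-k substrings (shingles)."""
        return {s[i:i + k] for i in range(len(s) - k + 1)}

    # Reconciling the shingle sets reduces the ordered problem to set reconciliation:
    # shingles("synchronize") ^ shingles("synchronise")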

See also

  • SyncML, a standard mainly for calendar, contact and email synchronization

Notes

  1. Agarwal, S.; Starobinski, D.; Ari Trachtenberg (2002). "On the scalability of data synchronization protocols for PDAs and mobile devices". Network, IEEE 16 (4): 22–28. doi:10.1109/MNET.2002.1020232. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1020232&isnumber=21950. Retrieved 2007-07-27.
  2. A. Tridgell (February 1999). Efficient algorithms for sorting and synchronization. PhD thesis. The Australian National University. http://samba.org/~tridge/phd_thesis.pdf. 
  3. sks.dnsalias.net
  4. 4.0 4.1 Minsky, Y.; Ari Trachtenberg; Zippel, R. (2003). "Set reconciliation with nearly optimal communication complexity". Information Theory, IEEE Transactions on 49 (9): 2213–2218. doi:10.1109/TIT.2003.815784. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1226606. Retrieved 2007-07-27. 
  5. Palm developer knowledgebase manuals
  6. Ari Trachtenberg; D. Starobinski and S. Agarwal. "Fast PDA Synchronization Using Characteristic Polynomial Interpolation". IEEE INFOCOM 2002. doi:10.1109/INFCOM.2002.1019402. 
  7. Y. Minsky and A. Trachtenberg, Scalable set reconciliation, Allerton Conference on Communication, Control, and Computing, Oct. 2002
  8. S. Agarwal; V. Chauhan and Ari Trachtenberg (November 2006). "Bandwidth efficient string reconciliation using puzzles". IEEE Transactions on Parallel and Distributed Systems 17 (11): 1217–1225. doi:10.1109/TPDS.2006.148. http://ipsit.bu.edu/documents/puzzles_journal.pdf. Retrieved 2007-05-23.