The Unum Pattern can be described as a single conceptual object with a distributed implementation. An object, called the "master presence", is replicated at different locations using smart proxies, called "presences". The presences cache some state of the master presence and can be used to communicate with the master presence.
Note: At this point, this page is directly copied from MarkM's post to the E-Lang mailing list.
The Unum Pattern goes back to Chip Morningstar's work at Electric Communities.
Each replica of an Unum is a "presence" of the Unum, and all the presences jointly are taken to form the Unum. One of the presences is the "authoritative presence" -- its state is considered to be the "true" state of the Unum. The initial presence, call it A1, therefore starts out as the authoritative presence.
The other presences are effectively smart remote references to the authoritative presence. These "shadow presences" maintain a somewhat stale cached copy of some state from the authoritative presence -- but only state that remains useful even when stale. These shadow presences do support immediate calls for those operations that can be sensibly performed using this stale data -- giving us another huge victory over network latency. But operations needing accurate state must still be eventual, and must be delegated to the authoritative presence.
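A minimal single-process Python sketch of this division (the particular state split -- a staleness-tolerant `name` and an accurate `hits` counter -- is invented for illustration, and the direct method call stands in for what would really be a network send):

```python
class AuthoritativePresence:
    """Holds the true state of the Unum (hypothetical example state)."""
    def __init__(self, name):
        self.name = name   # replicated, staleness-tolerant state
        self.hits = 0      # accurate state, never cached

    def get_name(self):
        return self.name

    def increment_hits(self):
        # An "accurate" operation: only the authoritative presence may do it.
        self.hits += 1
        return self.hits


class ShadowPresence:
    """Smart remote reference: answers staleness-tolerant operations from a
    local cache, delegates accurate operations to the authoritative presence."""
    def __init__(self, authoritative):
        self._auth = authoritative
        self._cached_name = authoritative.get_name()  # possibly stale copy

    def get_name(self):
        # Immediate call: answered from the cache, no network round trip.
        return self._cached_name

    def increment_hits(self):
        # Accurate operation: must be delegated (eventually, in a real
        # system) to the authoritative presence.
        return self._auth.increment_hits()
```

The point of the split is that `get_name` costs nothing even when the authoritative presence is far away, while `increment_hits` pays the full latency price because staleness would make it wrong.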
The shadow presences also register themselves as observers (in E, "reactors") on the authoritative presence. Every time the authoritative presence changes replicated state, it notifies all its reactors, so that they may update their cached copies. In the absence of partition, we can say that these caches are always "eventually consistent" -- they are always consistent with some past state, they only move forward in time, and under quiescence they will always eventually become accurate. (Does this capture Lamport-like eventual consistency?)
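The registration and notification cycle might be sketched as follows (a single-process Python sketch; the class and method names are illustrative, not E's actual reactor API):

```python
class AuthoritativePresence:
    """Notifies registered reactors whenever replicated state changes."""
    def __init__(self, state):
        self._state = state
        self._reactors = []

    def register_reactor(self, reactor):
        self._reactors.append(reactor)
        reactor.react(self._state)   # bring the new reactor up to date

    def set_state(self, new_state):
        self._state = new_state
        for r in self._reactors:     # notify every reactor of the change
            r.react(new_state)


class ShadowPresence:
    """Registers itself as a reactor and keeps a cached copy of the state."""
    def __init__(self, authoritative):
        self.cached_state = None
        authoritative.register_reactor(self)

    def react(self, new_state):
        # Updates arrive in order, so the cache only moves forward in time.
        self.cached_state = new_state
```

Under quiescence -- once `set_state` stops being called and all notifications have been delivered -- every `cached_state` equals the authoritative state, which is the eventual-consistency property described above.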
During a partition, a shadow presence can still give correct, even if increasingly stale, service for the staleness-tolerant operations. Of course, it must refuse the accurate operations. Should the authoritative presence again become reachable, the shadow should "heal". (Note: at EC we didn't do this. Instead, we always invalidated shadow presences on partition. So although both choices seem valuable, we don't yet have any experience with shadows that survive partition.)
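A sketch, under the assumption that shadows survive partition, of how refusal and healing could look (the `on_partition`/`on_reconnect` hooks and the `Authoritative` stand-in are hypothetical, not EC's API):

```python
class Authoritative:
    """Stand-in for the authoritative presence; real access would go over
    the network."""
    def __init__(self, state):
        self.state = state


class ShadowPresence:
    """Staleness-tolerant reads keep working during partition; accurate
    operations are refused; reconnection 'heals' the cache."""
    def __init__(self, authoritative):
        self._auth = authoritative
        self._cached = authoritative.state
        self._partitioned = False

    def on_partition(self):
        self._partitioned = True

    def on_reconnect(self):
        # "Healing": become a live shadow again and refresh the cache.
        self._partitioned = False
        self._cached = self._auth.state

    def read(self):
        # Staleness-tolerant: answerable even while partitioned,
        # though the answer may be increasingly stale.
        return self._cached

    def update(self, new_state):
        # Accurate operation: must be refused during a partition rather
        # than answered from possibly wrong state.
        if self._partitioned:
            raise ConnectionError("partitioned from the authoritative presence")
        self._auth.state = new_state
        self._cached = new_state
```

The EC alternative mentioned in the note would replace `on_partition` with permanent invalidation: every subsequent call, staleness-tolerant or not, would fail.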
What happens when a shadow presence A2 is itself passed to a third party? Two simple possibilities are:
1. A new shadow presence A3 is created that takes the authoritative presence A1 as authoritative. A2 and A3 would both be registered as reactors on A1.
2. A new shadow presence A3 is created that takes shadow presence A2 as authoritative. A2 is a reactor on A1, and A3 is a reactor on A2.
Option 1 is Granovetter introduction, and supports grant matching. Option 2 is proxying, and does not.
Option 1 gives us a flat multicast fanout for state updates. Option 2 turns the presences into a spontaneously malformed multicast tree. (I say "malformed" because the topology of the tree is determined only by acts of introduction, and not by any sensible performance considerations.) NetNews, DNS, and Notes are all massively scalable systems that use Lamport-like eventual consistency to distribute state updates.
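The two fanout shapes can be compared in a small single-process sketch (the `Presence` class and its method names are invented for illustration; real presences would live in separate vats and communicate by eventual sends):

```python
class Presence:
    """Minimal presence that can also act as an update source for presences
    introduced via it."""
    def __init__(self, state=None):
        self.state = state
        self.reactors = []

    def register(self, reactor):
        # Register a downstream presence and bring it up to date.
        self.reactors.append(reactor)
        reactor.receive(self.state)

    def publish(self, new_state):
        # Called on the authoritative presence when replicated state changes.
        self.state = new_state
        for r in self.reactors:
            r.receive(new_state)

    def receive(self, new_state):
        self.state = new_state
        # Relaying here is what makes option 2 a multicast *tree*: a presence
        # introduced via a shadow forwards updates to anyone introduced via
        # it in turn.
        for r in self.reactors:
            r.receive(new_state)


# Option 1: flat fanout -- A2 and A3 both register directly on A1.
a1, a2, a3 = Presence("v1"), Presence(), Presence()
a1.register(a2)
a1.register(a3)

# Option 2: tree -- A3 registers on A2, which relays A1's updates.
b1, b2, b3 = Presence("v1"), Presence(), Presence()
b1.register(b2)
b2.register(b3)
b1.publish("v2")   # reaches b3 via b2, not directly
```

In option 1 each update costs A1 one send per shadow; in option 2 the cost is spread across the tree, but latency and failure exposure accumulate along whatever chain of introductions happened to occur.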