interface EventSources[T -< None|...[EventSources[_]]]

interface Event[Sub[T] : Event[Sub,T], T -< None|...[EventSources[_]]] -< EventSources[T] {
  observe : (() -> ()) -> Handle
  impure observe : (Emit[T|Sub] -> ()) -> Handle
  impure observeUnsafe : (Emit[None] -> ()) -> Handle }

interface EventVal[Sub[V,T] : EventVal[Sub,V,T], V, T -< None|...[EventSources[_]]] -< EventSources[T] {
  observe : (V -> ()) -> Handle
  impure observe : (Emit[T|Sub] -> V -> ()) -> Handle
  impure observeUnsafe : (Emit[None] -> V -> ()) -> Handle }


coming soon...


coming soon...


coming soon...


coming soon...


Generally the composition of state transitions (i.e. function composition) causes the program's behavior to depend on the order and/or duplication of expressions in the code, a/k/a imperative programming.

Some of the order and/or duplication may be non-essential for achieving the intended behavior (i.e. semantics), which can be demonstrated by achieving the same program behavior in a more declarative model of programming.

Such non-essential order and/or duplication is an arbitrary, opaque semantics that the programmer is not modeling and actively managing, and thus causes deviations from the intended program behavior.

Declarative Programming

If a program's behavior does not change when reordering and/or duplicating its expressions, then it has the commutative and idempotent properties.

The benefit of the commutative and idempotent properties is that ordering and duplication in expression composition does not cause changes to the program's behavior.

However, a key realization is that declarative programming is not about achieving the commutative and idempotent properties everywhere by eliminating all state, i.e. it is not about eliminating imperative programming, but rather about eliminating dependencies on ordering and duplication which are not within the scope of the intended imperative semantics.

Precisely, 100% declarative programming is where the actual semantics (e.g. due to ordering and duplication) exactly equals the intended semantics (e.g. due to ordering and duplication). Any lack of commutativity or idempotence is intended and transparent, i.e. occurs only in the intended semantics.

Precisely, imperative programming is the (both intended and unintended) dependencies on ordering and duplication in the actual semantics.

An example of declarative (i.e. intended) order is z-index in CSS. A hypothetical example of declarative (i.e. intended) duplication would be instancing of an HTML tag with a unique id attribute if HTML was idempotent.

Of course, 100% declarative programming does not exist. The goal is to decrease the amount of unintended semantics, i.e. to reduce the unintentional (i.e. opaque) loss of commutativity and idempotence.

Deterministic Semantics

Assuming no random inputs, the program's behavior is always deterministic (repeatable from initial conditions) w.r.t. the code as the observer. Fortunately1 this total observer never exists for a program coded in a language with unbounded recursion, i.e. a Turing-complete language.

The problem is that due to the Halting theorem, it is impossible to sample (test) all the behavior of such a program. Thus the programmer must rely on a model of the expected or intended semantics.

If there is ordering and duplication that is ‘hidden’ (opaque) from the programmer's model of the intended semantics, e.g. see impure functions below, the unintended effects on the program's behavior appear to the programmer as randomness or indeterminism (Murphy's Law) in his model of cause and effect between code changes and program behavior. This is aliasing error, because the programmer's model does not incorporate infinite samples (impossible due to the Halting theorem) of all of the actual semantics.

A bug is an arbitrary, indeterminate (i.e. aliasing), yet deterministic (i.e. repeatable) condition relative to the program code and a repeatable environment; but since the entropy of the state machine of the actual semantics tends to maximum, the aliasing appears as random error relative to the programmer's intended semantics. This is due to the irreversibility of time (and universal entropy). The second law of thermodynamics tells us that entropy (i.e. disorder, the number of independent possibilities) tends to maximum forever. Coase's theorem tells us that closed systems are subverted by the universal entropic force tending toward maximum degrees-of-freedom. In short, the environment is never perfectly repeatable and never perfectly closed, i.e. the distributed case (below) always exists.

The goal of declarative programming is to reduce the gap between the sampling rate of the programmer's intended (transparently declared) semantics and the actual semantics due to unintended (opaque) ordering and duplication dependencies.

1A total observer would mean knowledge is static. Since software is the encoding of knowledge, software would become static. A person could eventually know everything. Then I posit that nothing would exist.

Pure Functions

The only allowed inputs to a pure function (PF) are any immutable external state and the PF's parameter list, whose values must be fully determined before the PF is called. The only allowed output is the PF's return value.

PFs may have systemic side-effects which are not directly accessible in the code.

PFs may also have side-effects which are directly accessible, because the expressions inside a PF are not guaranteed to be commutative nor idempotent.

For example, if the expressions within a PF f are composed of calls to PFs, and if f returns a data structure that has ordered and/or duplicate elements, then the dependent expressions g() in f are neither commutative nor idempotent.

In return List(g(1), g(2), g(2), g(3)), the order of the expression calls g() to the PF g determines the order in the returned list, so reordering them and/or eliminating the duplicates will change the behavior of the program.
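The point can be sketched in Python (g and its body are hypothetical stand-ins for any pure function):

```python
def g(x):
    # A pure function: its result depends only on its argument.
    return x * 10

# The order and duplication of the g() calls determine the list's contents.
original = [g(1), g(2), g(2), g(3)]    # [10, 20, 20, 30]
reordered = [g(2), g(1), g(3), g(2)]   # [20, 10, 30, 20]
deduplicated = [g(1), g(2), g(3)]      # [10, 20, 30]

# Although g is pure, the composition of its results is neither
# commutative nor idempotent: reordering or deleting duplicates
# changes the returned data structure.
assert original != reordered
assert original != deduplicated
```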

Although the programmer may be intending the semantics of the list order in the above example, in the general case a PF can leak to its caller some ordering and duplicates that are not essential to the intended semantics, and thus can cause accidental, undetected, and thus unmodeled program behavior.

This hidden leakage is much more severe with impure functions, because their interaction with external mutable state (i.e. not the parameter list and return value) is not visible in the function call expression, i.e. the expression is referentially opaque.

Referential Transparency

Expressions are referentially transparent (RT) when every duplicate can be replaced with the result of evaluating the expression once (and the program's behavior is unaffected by this substitution). Every call to a PF is RT, except w.r.t. any systemic side-effects.

The composition of the values of RT expressions is neither idempotent nor commutative, because the definition of RT says nothing about the relative compositional ordering of expressions, nor about deleting duplicate expressions (deleting is not the same as substitution by the evaluated value). The prior example of the functional composition for the ordered list could employ RT sub-expressions, i.e. g(1), g(2) and g(3), which are clearly neither idempotent nor commutative w.r.t. the order in the example list. W.r.t. composition of their values, RT only allows the duplicates to commute, e.g. g(2) and g(2) can be transposed, which is a trivially useless form of commutativity. Idempotence requires that duplicate expressions don't change program behavior, thus deleting duplicates must not change program behavior. And clearly if one of the g(2) is deleted from the example list then the program's behavior will have changed.
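This distinction between substitution and deletion can be sketched in Python (again with a hypothetical pure g):

```python
def g(x):
    return x * 10  # pure, hence every call such as g(2) is RT

# RT permits substituting a duplicate expression by its evaluated value:
v = g(2)
assert [g(1), g(2), g(2)] == [g(1), v, v]  # behavior unchanged

# RT says nothing about deleting a duplicate expression:
assert [g(1), g(2), g(2)] != [g(1), g(2)]  # behavior changed
```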

Note, RT expressions are commutative w.r.t. their evaluation order, but RT expressions are useless (i.e. they make no declaration) if their values are not composed. From the definition of RT, RT expressions do nothing if their values are not stored (i.e. composed); therefore evaluating an RT expression in isolation is irrelevant to the program's behavior. It is the composition of RT expressions that gives the program a behavior, and composition of RT expressions is neither commutative nor idempotent.


To implement efficient PFs, immutable data structures can be efficiently ‘modified’ without making an entire copy, by returning a new version that shares the unchanged portions of the original structure.
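A minimal sketch of such structural sharing, using nested Python tuples as an immutable linked list (the names and representation are illustrative, not any particular library's):

```python
def set_head(pair, new_head):
    """Return a modified copy of an immutable (head, tail) linked list.

    Only one new cell is allocated; the tail is shared with the original,
    so the 'mutation' does not copy the entire structure.
    """
    head, tail = pair
    return (new_head, tail)

xs = (1, (2, (3, None)))   # immutable singly linked list 1 -> 2 -> 3
ys = set_head(xs, 99)      # 99 -> 2 -> 3

assert xs == (1, (2, (3, None)))   # original version is unchanged
assert ys == (99, (2, (3, None)))
assert xs[1] is ys[1]              # the tails are one shared object
```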

Immutable data structures combined with PFs provide deadlock-free reentrancy, i.e. building blocks for nondeterministic concurrency such as Actors.

Applying a PF to an element of an immutable collection is an RT expression; thus, given that RT and the immutability, the application is commutative over all elements (e.g. map), thus providing deterministic parallelism.
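A sketch of that deterministic parallelism in Python (square is a hypothetical PF; thread-pool parallelism stands in for any parallel evaluation strategy):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):      # a pure function
    return x * x

xs = (1, 2, 3, 4)   # an immutable collection

# Sequential map and a (potentially) parallel map give the same result,
# because each application square(x) is RT and xs cannot be mutated.
sequential = list(map(square, xs))
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(square, xs))

assert sequential == parallel == [1, 4, 9, 16]
```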

Higher-Level Semantics

Unintended ordering and duplication dependencies (i.e. unintended imperative semantics) can occur due to inability to express the exact intended semantics. Higher-level semantic models can solve these semantic impedance mismatch issues. Where the existence of unintended imperative semantics is reduced, the resonance of the intended semantics is increased and semantic impedance (programming friction) is reduced.

A programming language is low level when its programs require attention to the irrelevant.
—Alan Perlis, first recipient of the Turing Award

For example, the State (a Monad) eliminates the boilerplate of threading some state across functions that don't access that state, so as to compose these with functions that do access that state. This allows reuse of functions without having to code separate functions for each permutation of state that one might want to compose with. The State monad is roughly analogous to a threadable scope for global variables, i.e. that state is encapsulated so it is only accessible to functions in the composition, and thus the functions are pure w.r.t. that state. Thus the state effects are transparent (the functions operate on that state via inputs and return value only), delimited, and composable.
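A minimal State-monad sketch in Python (names such as unit, bind, get and put mirror the conventional combinators; the encoding is illustrative): a stateful computation is modeled as a pure function state -> (value, state), and bind threads the state through the composition.

```python
# A stateful computation is a pure function: state -> (value, state).

def unit(v):
    return lambda s: (v, s)

def bind(m, f):
    # Runs m, then passes its value to f and threads the state onward.
    def run(s):
        v, s2 = m(s)
        return f(v)(s2)
    return run

# Primitive computations on an encapsulated counter state:
get = lambda s: (s, s)
def put(s2):
    return lambda s: (None, s2)

# Increment the counter twice; no function in the chain mutates a
# variable, yet the state change is threaded through implicitly.
incr = bind(get, lambda n: put(n + 1))
prog = bind(incr, lambda _: incr)

assert prog(0) == (None, 2)
```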

Some have noted (and another) that monadic composition is imperative because it expresses an order, simultaneously implying a distinction from composition of RT calls of PFs. But in general PF composition (i.e. RT calls of PFs) expresses order and duplication, e.g. in f(g(x), g(x), h(x)) the PFs g and h must execute before f2 and the tuple (g(x), g(x), h(x)) expresses order and duplication. Declarative and imperative programming are not antithetical. Declarative programming is focused imperative programming. Declarative programming eliminates unintended imperative dependencies, so that the intended imperative dependencies are more resonant to program behavior.

2In a call-by-need (a/k/a lazy) evaluation strategy, e.g. Haskell, g and h would not necessarily always be executed before f completes; nevertheless the tuple expresses order and duplication. In the lambda calculus, the tuple pair is the building block for a list. Tackling the Awkward Squad erroneously implies (cf. “only where we want”) in the Introduction that lazy, pure functions are never imperative without monadic composition, notwithstanding that it correctly explains in section 2.5 that monadic composition can simulate a strict evaluation strategy in a lazy language but does not actually create an impure mutation of a variable as an impure function would allow, i.e. PF mutation is a state transition where the return value is a modified copy of the input.

Modeling Change

State is any value that changes.

Theorem: Changing values are either input to the top-level PF of the program, or each value is stored in a mutable variable by an impure function somewhere in the program.

For the PF portion that operates on these values, remember the State monad enables mixed composition of PFs that do and do not operate on some state. The input and return value of the State monad bind is either input to and returned from the top-level PF, or stored in a mutable variable by an impure function. The State monad enables the PF application of changes to values in a nested functional composition without inputting and returning those values in every PF in the composition. The IO monad in Haskell similarly threads the interactive input and output through PFs, which remain pure3 because the imperative order and duplication of monadic composition is explicit and the only way to access the side-effects is via the single copy of the IO monad. Arrows (or generally concatenative and point-free programming) enable even more complex patterns of threading, multiplexing, and demultiplexing state.

3PFs that access the IO monad are not RT w.r.t. the world state, because changes to the world can never be reversed, because the infinite world state is not storable (i.e. I/O can interact with other actors in the world). Whereas, State monad composition is RT, because each PF state transition mutation (from the immutable copy of the encapsulated state) could be repeated without changing the program behavior, and the definition of RT says nothing about the relative compositional ordering of expressions.

Propagation Indeterminacy

Regardless of whether state is threaded through PFs and/or stored in mutable variables, state always propagates as a reactive data flow. A key realization is that data flows are determinate only if they don't contain a cycle.

Indeterminacy of state propagation can result in loss of spatial declarativity, divergence, or deadlocks.

The available models of change fit into two general categories, signals and events.

Signals expose their values relative to time, i.e. a function Time -> Value, and that function may observe other signals, thus forming a directed graph. Normally signals will only contain non-blocking operations (e.g. writing to state never blocks), thus a cycle can cause an infinite loop (divergence) and/or loss of spatial declarativity (loss of spatial commutativity, since ordering creates cycles), but not a deadlock. The time input value is constant (i.e. an instant on the virtual time axis) during the acyclic portion of the directed propagation, thus all acyclically directed signals update deterministically. However, any concurrent writes (at the same virtual time instant) to state can result in indeterminate values. Also, if two signals write to the same state (including writing to the state of graph connections) then indeterminacy results (in the value of the state at that same time instant). To preserve spatial declarativity, arbitrary rules can transform the indeterminacy into unintended (opaque, i.e. less declarative) program behavior.
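An acyclic signal graph can be sketched in Python as pure functions of virtual time (the signal formulas here, e.g. mouse_x, are hypothetical stand-ins):

```python
# Signals are modeled as pure functions of (virtual) time; a derived
# signal observes other signals by calling them at the same time instant,
# so an acyclic graph updates deterministically.

def mouse_x(t):      # a source signal (an illustrative formula)
    return 3 * t

def mouse_y(t):
    return t + 1

def position(t):     # a derived signal observing two other signals
    return (mouse_x(t), mouse_y(t))

# All observations within one propagation share the same time instant,
# so the result is determined solely by t:
assert position(2) == (6, 3)
```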

Events call each of their observers' callback functions when a change to the observed state occurs. Event state can be valueless, e.g. a mouse click. A callback function can observe multiple events, it can add and remove observers, and it can write to state causing events to trigger. A cycle can be either a certain deadlock or a potential deadlock. Events are not spatially declarative, because they propagate instantly in real-time, i.e. callback functions do not input a virtual time parameter.
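A minimal event sketch in Python (the class and its method names are illustrative, not the interface at the top of this page): observers are invoked immediately, in registration (real-time) order, with no virtual time parameter.

```python
class Event:
    def __init__(self):
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)
        return callback            # a stand-in for a removal Handle

    def trigger(self, value=None): # event state can be valueless
        for cb in list(self._observers):
            cb(value)

log = []
clicks = Event()
clicks.observe(lambda v: log.append(("first", v)))
clicks.observe(lambda v: log.append(("second", v)))
clicks.trigger("click")

# Observers ran instantly, in real-time registration order:
assert log == [("first", "click"), ("second", "click")]
```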

Cycles can be prevented at compile-time, where we can declare which (signals or events) are allowed to contribute to a change (of a signal or event). Thus observers that can contribute to a change (of a signal or event), would not be allowed to observe that (signal or event). For signals, this type information must be added to both signals and shared state, since both can be observed by signals. For events, the type information only needs to be added to events, since an event cannot observe writable state separately from triggering the event.

For signals in open, distributed systems, acyclic signal graphs cannot be ensured at compile-time (and possibly not even detected at runtime), due to the open shared state being in the graph. Events have this spatial indeterminism implicitly expressed in their real-time order. Also, events can deadlock in this open, distributed case. Employing signals to accomplish semantics similar to events with distributed state will also be able to deadlock.

Subjective Conclusion

For this state model, events are chosen, because: