Approaching Reactivity with Discipline — I
Developing reactive interfaces from the ground up (1/3)
In this three-part series we will look at reactivity, what the declarative approach of modern frameworks and libraries actually means, how to build and structure reactive interfaces in a disciplined manner, and further optimisation techniques.
Modern web frontend frameworks/libraries are touted as declarative in nature. Put simply, one doesn’t need to manually perform DOM (Document Object Model) operations for data to show up in the UI; the framework/library does that automatically.
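As a rough illustration (the counter state and the #count element below are made up for the example), the imperative style mutates the DOM by hand on every change, while the declarative style only describes what the UI should look like for a given state:

```typescript
// Imperative: every state change must be mirrored in the DOM by hand.
let count = 0;

function incrementImperative(): void {
  count += 1;
  // We must remember to update every element that displays `count`.
  const el = document.querySelector("#count");
  if (el) el.textContent = String(count);
}

// Declarative (framework style): describe what the UI should be for a given
// state, and let the library work out how and when to touch the DOM.
function countView(state: { count: number }): string {
  return `<button id="count">${state.count}</button>`;
}
```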
Certain frameworks and libraries advertise that they use a V-DOM. The V-DOM (Virtual DOM) is an in-memory representation of the DOM, intended to limit the number of mutations performed on the actual DOM (via various diffing algorithms) compared with what purely imperative operations would have required, while ensuring that every update to a data attribute (a.k.a. state variable) is reflected in its corresponding HTML element representation(s) in the DOM.
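A deliberately tiny sketch of the idea, not any particular library’s implementation: a virtual node is plain data, and a diff/patch step compares old and new trees, emitting only the mutations that actually differ.

```typescript
// A minimal virtual node: plain data describing an element.
interface VNode {
  tag: string;
  text?: string;
}

// Compare two virtual nodes and touch the real DOM only where they differ.
function patch(el: Element, oldNode: VNode, newNode: VNode): void {
  if (oldNode.tag !== newNode.tag) {
    // Different tag: replace the whole element.
    const replacement = document.createElement(newNode.tag);
    replacement.textContent = newNode.text ?? "";
    el.replaceWith(replacement);
    return;
  }
  if (oldNode.text !== newNode.text) {
    // Same tag, different text: a single targeted mutation.
    el.textContent = newNode.text ?? "";
  }
  // Children, attributes, keys, etc. are omitted for brevity.
}
```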
And as we all have been told: DOM operations are costly. Or are they?
This is where it gets weird. If the V-DOM is an abstraction over the DOM, can the claim of performant, minimal differential mutation really justify using it over manual introspection and updates?
Well, it turns out that either we have been misled about the performance cost of DOM operations (and, by extension, the performance benefits of the V-DOM), or we have been looking at it the wrong way all along.
Let’s break it down.
Reactivity
All of this hullabaloo (including a library’s or framework’s declarative nature) has been bestowed upon us by one underlying key implementation: what is otherwise known as reactivity.
In its simplest form, reactivity means that a change in a value automatically propagates to the computations that depend on it, rather than requiring those computations to be manually re-triggered. It is abundantly observed in nature in various shapes and forms, but has historically been quite difficult to achieve in computing systems.
In an environment like a JS runtime specifically, reactivity has to be emulated.
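A minimal sketch of how that emulation can look (signal and effect here are illustrative names, not any specific library’s API): reading a value inside an effect records the dependency, and writing the value re-runs the dependent computation.

```typescript
// A bare-bones signal/effect pair to emulate reactivity.
type Effect = () => void;
let activeEffect: Effect | null = null;

function signal<T>(initial: T) {
  let value = initial;
  const subscribers = new Set<Effect>();
  return {
    get(): T {
      // Reading inside an effect registers that effect as a dependency.
      if (activeEffect) subscribers.add(activeEffect);
      return value;
    },
    set(next: T): void {
      value = next;
      // Writing re-runs every dependent computation automatically.
      subscribers.forEach((fn) => fn());
    },
  };
}

function effect(fn: Effect): void {
  activeEffect = fn;
  fn(); // run once so the reads inside register their dependencies
  activeEffect = null;
}

// Usage: the log re-runs whenever `price` changes, with no manual re-trigger.
const price = signal(10);
effect(() => console.log("total:", price.get() * 2));
price.set(15); // logs "total: 30"
```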
Viewing the UI as a function of state, i.e., UI = f(state), revolves around a central flow, which in one form or another follows the near-universal Model -> View -> Update (MVU) pattern. Read it as: an update triggered via the view updates the model, a new view is rendered to reflect the updated model, and that view then waits for the next update signal(s) to be triggered.
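A stripped-down sketch of that flow (the Model and Msg shapes are hypothetical): messages dispatched from the view produce a new model, and the view is re-rendered as a pure function of that model.

```typescript
// Model -> View -> Update, in its smallest form.
interface Model {
  count: number;
}

type Msg = { kind: "increment" } | { kind: "decrement" };

// Update: produce the next model from the current model and a message.
function update(model: Model, msg: Msg): Model {
  switch (msg.kind) {
    case "increment":
      return { count: model.count + 1 };
    case "decrement":
      return { count: model.count - 1 };
  }
}

// View: a pure function of the model, i.e. UI = f(state).
function view(model: Model, dispatch: (msg: Msg) => void): HTMLElement {
  const button = document.createElement("button");
  button.textContent = String(model.count);
  button.onclick = () => dispatch({ kind: "increment" });
  return button;
}

// The runtime loop: each dispatched message yields a new model and a new view.
function mount(root: Element, initial: Model): void {
  let model = initial;
  const dispatch = (msg: Msg) => {
    model = update(model, msg);
    root.replaceChildren(view(model, dispatch));
  };
  root.replaceChildren(view(model, dispatch));
}
```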
Now, on to the mistaken perception regarding the DOM and the V-DOM.
The Real Problem
To put it bluntly, the V-DOM was never really about performance optimisation. It was about the discipline of keeping the UI in sync with the data, something that is cumbersome, error-prone, and sometimes outright chaotic to do by hand.
Libraries and frameworks exist because, at heart, they can be applied to generic scenarios, no matter how specific the requirements that motivated them in the first place.
In the specific case of declarative web development, the generic outcome is that the framework keeps the UI up to date automatically, going so far as to re-render the entire component hierarchy if it has to, irrespective of whether a particular attribute bound to the UI has changed. Don’t worry about the cost, because the V-DOM will catch the difference and only update the parts of the actual DOM whose values have actually changed.
Some rely on a V-DOM (e.g. React and API-compatible libraries, Elm, etc.) and re-render regardless, delegating the job of minimising mutations to the reconciliation mechanism of their V-DOM implementation. Others attach reactive contexts to components using Proxies (e.g. Vue) and catch changes up front to minimise how much of the UI is re-rendered.
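A rough sketch of the Proxy-based idea, not Vue’s actual implementation: wrapping the state object means the set trap reports exactly which property changed, so only the views bound to it need re-rendering.

```typescript
// Wrap a state object so writes are observable before any render happens.
function reactive<T extends object>(
  target: T,
  onChange: (key: string | symbol) => void
): T {
  return new Proxy(target, {
    set(obj, key, value) {
      const ok = Reflect.set(obj, key, value);
      // The framework now knows exactly which property changed, and can
      // schedule re-renders only for the views bound to that property.
      onChange(key);
      return ok;
    },
  });
}

// Usage: only views depending on `title` would need to update.
const state = reactive({ title: "Hello", likes: 0 }, (key) => {
  console.log("changed:", String(key));
});
state.title = "Hello, world"; // logs "changed: title"
```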
Aside: Elm goes so far as to recommend a single, mostly flat Model representing as much of the web app as possible. There is nothing wrong with this approach, and with discipline one may eventually find it helpful; but data often arrives nested and/or unrelated, where a flat model may not be the right structure. And in this strictly Haskell-inspired FP language, there is no good way to update deeply nested properties without lenses (and trust me, you will spend more time writing lenses to reach deep properties than actually updating them).
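For contrast, here is what updating one deeply nested field immutably looks like without lens machinery, sketched in TypeScript with a made-up model shape: every layer above the field has to be copied by hand.

```typescript
// A hypothetical nested model.
interface Profile { address: { city: string; zip: string } }
interface User { id: number; profile: Profile }
interface Model { user: User }

// Updating one deeply nested field immutably means rebuilding every layer above it.
function setCity(model: Model, city: string): Model {
  return {
    ...model,
    user: {
      ...model.user,
      profile: {
        ...model.user.profile,
        address: { ...model.user.profile.address, city },
      },
    },
  };
}
```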
It is innate human nature to rely on provided and/or existing mechanisms that make our lives easier (or worse, depending on how you look at it).
Modern web frameworks and libraries have evolved to the point where really complex applications can be built with relative ease. Plenty of edge cases, nested-component scenarios, and rounds of performance profiling have been worked through to get us to where we stand today.
However, after all this, the question still remains: do we really need frameworks/libraries to achieve reactivity, and subsequently declarativity, for applications that could otherwise be developed through discipline and structure?
Head on to the second part of this series (https://www.getdefault.in/post/approaching-reactivity-with-discipline-ii) and see how we can approach this problem in a disciplined manner.