I can't say exactly when, but I was writing it in itself before I made the initial commit to the public repo, so at least 6 months ago. Every change I've made since then (in addition to the plugins I wrote) would have been written in lite itself.
As the editor is written mostly in Lua, with C taking care of the lower-level parts, plugins can typically customise anything, limited only by what is exposed through Lua and the C API. Beyond adding custom commands, plugins can also do things like patch straight into DocView's line-drawing function to draw additional content:
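For example, a plugin can keep a reference to the original method and wrap it - a minimal sketch (the `draw_line_body` name follows lite's `core.docview`, but treat the details as illustrative):

```lua
local DocView = require "core.docview"
local style = require "core.style"

-- keep a reference to the original line-drawing method
local draw_line_body = DocView.draw_line_body

-- patch it: draw the line as normal, then overlay extra content
function DocView:draw_line_body(idx, x, y)
  draw_line_body(self, idx, x, y)
  local line = self.doc:get_selection()
  if idx == line then
    -- e.g. underline the line containing the caret
    local h = self:get_line_height()
    renderer.draw_rect(x, y + h - 1, self.size.x, 1, style.caret)
  end
end
```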
The treeview at the left of the screen is implemented as a normal plugin, and like any other plugin can be removed from lite by simply deleting the `treeview.lua` file.
I support this! Have been trying to use VS Codium[1], but RStudio keeps pulling me back in. An alternative with some nice R plugins would be fantastic.
Not even that: SDL just provides a pixel buffer; the application draws everything itself, per pixel. Lite uses a technique I refer to as "cached software rendering", which allows the application code to be written as if it's doing a full-screen redraw whenever it wants to update; the renderer cache (rencache.c) then works out at the end of the frame which regions actually need to be redrawn and redraws only those. You can call `renderer.show_debug(true)` to show these redraw regions: https://youtu.be/KtL9f6bksDQ?t=50
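As a toy sketch of the idea in Lua (the real implementation, rencache.c, is in C and uses a proper hash, but the shape is the same):

```lua
local CELL = 96       -- the screen is split into a grid of cells
local GRID_W = 1024   -- cell index stride (arbitrary for the sketch)
local prev, curr, commands = {}, {}, {}

-- the application "draws" by pushing commands, as if redrawing everything
local function draw_rect(x, y, w, h, color)
  -- color is a plain string here so it can be concatenated below
  commands[#commands + 1] = { "rect", x, y, w, h, color }
end

local function end_frame(screen_w, screen_h, redraw)
  -- hash every command into each grid cell it overlaps
  for _, cmd in ipairs(commands) do
    local key = table.concat(cmd, ",")  -- stand-in for a real hash
    for cy = math.floor(cmd[3] / CELL), math.floor((cmd[3] + cmd[5]) / CELL) do
      for cx = math.floor(cmd[2] / CELL), math.floor((cmd[2] + cmd[4]) / CELL) do
        local i = cy * GRID_W + cx
        curr[i] = (curr[i] or "") .. key
      end
    end
  end
  -- redraw only the cells whose hash changed since the previous frame
  for cy = 0, math.floor(screen_h / CELL) do
    for cx = 0, math.floor(screen_w / CELL) do
      local i = cy * GRID_W + cx
      if curr[i] ~= prev[i] then
        redraw(cx * CELL, cy * CELL, CELL, CELL)  -- re-run commands clipped here
      end
    end
  end
  prev, curr, commands = curr, {}, {}
end
```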
That's not in any way different from the typical GDI, CoreGraphics, or GTK/Cairo way of doing rendering.
Windows, macOS, and GTK maintain an internal pixmap buffer for a window.
And when needed you call InvalidateRect(wnd, rc) and receive WM_PAINT with a cumulative rect to update.
Personally, I would create an abstraction that wraps a couple of functions from GDI, CoreGraphics, and Cairo and use that instead of manual pixmap rendering. It would be faster and more flexible. All of that UI can be rendered with just 2-3 functions: FillRect, DrawText, MeasureText.
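To make that concrete, a hypothetical sketch of such an interface - here with a stub backend that just prints, where a real one would forward to GDI, CoreGraphics, or Cairo:

```lua
-- hypothetical 3-function backend; real implementations would wrap
-- GDI, CoreGraphics or Cairo
local gfx = {
  fill_rect    = function(x, y, w, h, color) print("rect", x, y, w, h, color) end,
  draw_text    = function(text, x, y, color) print("text", x, y, text, color) end,
  measure_text = function(text) return #text * 8, 16 end,  -- stub metrics
}

-- a whole editor line rendered through just those calls
local function draw_line(text, x, y, fg, bg)
  local w, h = gfx.measure_text(text)
  gfx.fill_rect(x, y, w, h, bg)
  gfx.draw_text(text, x, y, fg)
end

draw_line("local x = 42", 10, 10, "#ddd", "#222")
```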
It's quite different, actually, both in application programming logic and in expressive power of drawing primitives.
With dirty rectangles, it's the application's responsibility to minimize its own drawing; the renderer will at best avoid copying bits whose target lies outside the dirty rectangle. The renderer can only optimize in situations where it understands an entire call as a full primitive and knows that the result will lie outside the dirty rectangle.
With rxi's approach, the application gets to define the commands which update the UI - which may be as complex as desired, as long as they have a calculable rectangle - and the cost of that rendering can be skipped wherever the output hasn't changed, without needing to query for dirty rectangles or do any application-side conditional logic beyond the layer that rxi wrote.
It's particularly powerful if the rendering primitives are higher level than those provided by the native APIs.
FWIW this is basically "dirty rectangles", which was a very common technique for avoiding full-screen updates in games back when the hardware wasn't fast enough to do that.
You're equating the final stage of this approach to the entire approach. The point of this technique is that you get the benefits you typically would from dirty rectangles without the burden of the bookkeeping you would traditionally have. Using this technique your application "redraws" everything as if it's drawing it fresh each frame and the renderer cache takes care of determining what's actually changed.
Typically with dirty rectangles you would have to manage this state in the application code: for example, determining that line X was edited and then updating the region for that line, or determining that view Y moved and updating a dirty rectangle based upon its previous and current positions.
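Roughly, the bookkeeping that the traditional approach forces on you looks like this (the editor structure here is hypothetical, shown only for contrast):

```lua
-- traditional application-managed dirty rectangles
local dirty = {}   -- rectangles to repaint this frame

local function invalidate(x, y, w, h)
  dirty[#dirty + 1] = { x = x, y = y, w = w, h = h }
end

-- every mutation must remember to invalidate exactly the right region
local function edit_line(view, idx, text)
  view.lines[idx] = text
  invalidate(view.x, view.y + (idx - 1) * view.line_h, view.w, view.line_h)
end

local function move_view(view, new_x, new_y)
  invalidate(view.x, view.y, view.w, view.h)   -- old position
  view.x, view.y = new_x, new_y
  invalidate(view.x, view.y, view.w, view.h)   -- new position
end
```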
It isn't. Normally the application needs to be aware of dirty rectangles; it fetches them from the window compositor, and it needs to limit its drawing calculations based on the dirty rectangle, in order to get the full benefit.
(Dirty rectangles are still perfectly common in desktop apps for when you e.g. drag a window back on screen after overlapping the edge, or if you scroll a window.)
Nah, it's tile-based concurrent precompositing, like web browsers do. Each tile knows what's in it (i.e. what set of DOM elements); and subscribes to state-change events for those DOM elements; and any such state-change event will trigger the tile to re-render its cached texture. Then, on each frame, all you have to do to draw everything, is to grab the latest cached texture from each tile, and blit those (or set them up as a grid of flat-projected rects in screen-space, if you're in 3D-semantics land.)
You can get additional benefits from this approach, by doing multiple layers of it (e.g. having scrollable surfaces have their own tiles that precompute the inner-document-viewport-space rather than the outer-viewport-space, such that the inner tiles aren't invalidated by scrolling the outer viewport.) This technique ends up forming a tree of tiles, where tiles higher up the tree, when invalidated, re-render trivially by compositing tiles further down the tree into themselves. Thus, another name for this approach is a "precompositing tree."
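In sketch form, each node of that tree looks something like this (the structure is hypothetical):

```lua
local Tile = {}
Tile.__index = Tile

function Tile.new(render_fn)
  return setmetatable({ render = render_fn, cached = nil,
                        children = {}, parent = nil }, Tile)
end

-- a state-change event on anything this tile watches invalidates it;
-- the invalidation bubbles up so ancestors re-composite too
function Tile:invalidate()
  self.cached = nil
  if self.parent then self.parent:invalidate() end
end

-- per frame: a valid tile is just its cached texture; an invalid one
-- re-renders trivially by compositing its (mostly still-valid) children
function Tile:texture()
  if not self.cached then
    local parts = {}
    for _, child in ipairs(self.children) do
      parts[#parts + 1] = child:texture()
    end
    self.cached = self.render(parts)
  end
  return self.cached
end
```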
The difference between this approach and dirty rects, is in the direction of information flow. In tile-based precompositing, the information only flows in one direction—from the user, through view-controller, into the DOM, to the tiles, and then out the display. Dirty rects, meanwhile, are a signal sent backwards, from the display system to the program, essentially telling it that the display system lost/discarded the information needed to re-draw an area, so could the program please send it over again. (The program doesn't even have to re-render in response; some dirty-rects implementations, like X11's DAMAGE extension, just involve the client application re-transmitting pixbuf data to the server from its own precomposited buffer.)
Also, dirty rects / screen-damage doesn't solve the problem of hardware not being fast enough; it solves the problem of hardware not having enough VRAM to do per-frame compositing from undamaged intermediates. In low-VRAM conditions, you can only keep around the final pre-composited image; and so any time you "damage" / make "dirty" a region of the screen (e.g. by removing an overdrawn element, which should have the semantics of revealing whatever was there before that element overdrew it) then you need to propagate a request back to the renderer to re-draw (and, for efficiency, re-draw just that region), because you don't just have an intermediate texture laying around for that window/stage-layer/etc. to re-source it from. If you did, then dirty-rects would never come into play, since you'd just re-composite everything each frame. (Which is cheap even on old-school CPU-only blitters—you just have to alternate which pixmap pointer you're basing your LOADs off of using either a rect-overlap check, or a mask-bitmap [which gets you 8 pixels' mask-states per LOAD.] Even the Gameboy can do it!)
Like the other guy, you are thinking about regions (which is what the X11 DAMAGE / WM_PAINT / etc. stuff uses). Dirty rectangles is a method used in older games where the game kept track of - usually - sprites on screen, and whenever something changed (e.g. a sprite moving) that part of the screen was marked as dirty (often implemented as a list of non-overlapping rectangles, hence the name). Dirty rects also flow only in one direction.
Just an aside. Everyone should play with SDL and write a simple game where you have to draw your own pixels. It's extremely satisfying. Do it in C. It's pretty simple and fun.
I think this is correct, in the sense that react uses a VDOM. When you make changes, you sort of pretend that you are changing everything, but the rendering engine figures out the differences to the real DOM, based on the in-memory changes, and makes minimal edits to it. This is why you can use react with all kinds of things that aren't DOM or even web-related (react-native, react-blessed, react-babylonjs, etc.) I contributed to react-blessed & react-babylonjs, and wrote the main chunk of react-babylonjs's current fiber system. You essentially just use the VDOM to describe the full graph, and that graph doesn't have to be DOM at all.
I'm open to the possibility that I'm that wrong in my understanding, but this didn't help me understand any better at all.
The technique does sound similar to me. Both (as I understand it) maintain a representation in memory of the final rendering and use a diff to determine which parts of the rendering to perform. The "virtual DOM" technique isn't strictly tied to a browser DOM, though the term is a reference to that, and React's (in particular) has been adapted to many other rendering targets.
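As a toy sketch of that diffing, assuming a hypothetical node shape of `{ tag, attrs, children }` (React's real reconciler is far more involved - keyed children, fibers, etc.):

```lua
-- compare the previous and next in-memory trees and emit only the
-- minimal patch operations needed to bring the output up to date
local function diff(old, new, path, patches)
  path, patches = path or "root", patches or {}
  if old == nil then
    patches[#patches + 1] = { op = "create", path = path, node = new }
  elseif new == nil then
    patches[#patches + 1] = { op = "remove", path = path }
  elseif old.tag ~= new.tag then
    patches[#patches + 1] = { op = "replace", path = path, node = new }
  else
    for k, v in pairs(new.attrs or {}) do
      if (old.attrs or {})[k] ~= v then
        patches[#patches + 1] = { op = "set", path = path, key = k, value = v }
      end
    end
    local n = math.max(#(old.children or {}), #(new.children or {}))
    for i = 1, n do
      diff((old.children or {})[i], (new.children or {})[i],
           path .. "/" .. i, patches)
    end
  end
  return patches
end
```

The point is that the application just builds the whole new tree; only the patches actually reach the rendering target.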
I'd be happy to learn more if you'd be kind enough to explain what I misunderstood.
I don't buy that a virtual DOM necessarily implies the existence of a DOM. It really depends on how persistent you want your "virtual" to be.
A virtual DOM, to me, means that the application renders by constructing, every time, data structures which are handed off to be reconciled with the display.
If, as in an HTML application, you render by means of a retained mode real DOM, then the reconciliation is via comparison of the virtual with the real. But that's not the only way to handle the output of the construction of a virtual DOM; it could figure out how those structures intersect with the dirty rectangle/s, and only render the subtree of the DOM which applies.
rxi's technique resembles a virtual DOM of depth 2 (1 root and everything is a child) and absolute positioning, though it's even closer to an OpenGL display list or combined vertex & command buffer. For that reason, I think it's a little bit of a stretch; not on the virtual DOM angle, but on the not particularly DOM-like nature of the drawing commands.
> The virtual DOM (VDOM) is a programming concept where an ideal, or “virtual”, representation of a UI is kept in memory and synced with the “real” DOM
I get the feeling you're reading this too literally. I think eyelidlessness is talking about using a technique that is analogous or similar to that of a virtual DOM. Nobody is talking about an actual DOM.
React Native uses a DOM; the only nuance is that that DOM is a tree of native widgets/windows, which is a perfectly good DOM.
Again, a virtual DOM is a projection of a real DOM in one form or another. It could be a tree of anything that can be represented by attributed nodes and leaves.
A DOM tree has nothing to do with rendering and pixels; that's why "not even close". By using a virtual DOM you can update (by diffing) some tree that has no visual representation at all - it is a pure data structure. Think of an abstract XML config that can be reconciled with its virtual DOM.
I think you're using a term in a way it wasn't meant to be used. I tried to give this the benefit of the doubt: I searched for uses of DOM outside of HTML and XML documents, and it's just not a term that's used generally for any tree of nodes and leaves. There are much more general terms for those kinds of structures, and pretty much any program which maintains or creates structured data has some kind of representation of data with those kinds of relationships. But the DOM, as defined by the W3C:
> The Document Object Model (DOM) is a programming API for HTML and XML documents.
I was unable to find any other usage.
In reality, I think it would be more accurate to refer to the virtual DOM (at least React's; I haven't spent much time familiarizing myself with other implementations using the same naming) as a virtual output data structure, where the output may be rendered to a screen, to a string serialization, or to any other target... but the role it plays (when it performs well) is to optimize output over time by minimizing the changes pushed to its destination. One of those output targets is the DOM.
You chose to respond to one of my examples among many non-DOM React renderers, but another one very much has everything to do with rendering and pixels, and that's canvas.
And pixels, to software, are just another data structure. Software doesn't emit light from an LED or a diode; it just provides data to hardware which produces physical side effects.
Honestly, this has been an enlightening discussion, but primarily because I've been reminded that my instincts for engaging dismissive comments on the internet are there for a reason. I don't hope to convince you, I don't think any further engagement would be productive, have a nice weekend.
> virtual output data structure, where the output may be rendered to a screen.
Sigh. A virtual DOM has nothing to do with rendering.
A virtual DOM was introduced as a lightweight construct for generating and modifying a tree of nodes.
Once that structure is generated, it is used as a prototype for updating the "master" DOM (or any other tree of nodes).
Any modern UI system is a tree of widgets/windows - a child has one and only one parent. So a vDOM can be applied to an HTML/XML DOM just as well as to a native UI tree of widgets/windows. That's why there are React, React Native, and my native implementation of React in Sciter (https://sciter.com/docs/content/reactor/helloworld.htm), for that matter.
Just treat "DOM" as a short name for a tree of nodes where each node has a) a tag (or type, or class), b) a collection of attributes, and c) a collection of children. Nothing more, nothing less.
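Literally, as a data structure (purely illustrative):

```lua
-- a "DOM" node by that definition
local node = {
  tag = "button",                        -- a) tag / type / class
  attrs = { label = "OK", width = 80 },  -- b) collection of attributes
  children = {},                         -- c) collection of children
}
```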
In any case, I have no idea where a "grid of pixels" comes into any of this.
[1] https://github.com/rxi/microui