It's not just simple appendChild calls. I actually worked on an app which updated a large table – displaying file metadata, checksums calculated in web workers, etc. for a delivery – and found React to be 40+ times slower than using the DOM directly[1] or even simply setting innerHTML, with the gap getting worse as the number of records increased.
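For reference, the direct-DOM baseline in that comparison was essentially code along these lines: build all of the rows off-document in a DocumentFragment and attach them in a single operation. This is a rough sketch; the field names and the function name are illustrative rather than taken from the actual app.

    // Rebuild the table body in one pass: rows are created off-document in a
    // DocumentFragment so the browser only has to reflow once on insertion.
    function renderTable(tbody, records) {
      var fragment = document.createDocumentFragment();
      records.forEach(function (record) {
        var row = document.createElement('tr');
        [record.name, record.size, record.checksum].forEach(function (value) {
          var cell = document.createElement('td');
          cell.textContent = value;
          row.appendChild(cell);
        });
        fragment.appendChild(row);
      });
      tbody.textContent = '';      // drop the existing rows
      tbody.appendChild(fragment); // single insertion, single reflow
    }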
The main trap you're falling prey to is the magical thinking which is sadly prevalent about the virtual DOM and batching. Basic application of Amdahl's law tells us that the only way the React approach can be faster is if the overhead of the virtual DOM and framework code is balanced out by being able to do less work. That's true if you're comparing to, say, a primitive JavaScript framework which performs many unnecessary updates (e.g. re-rendering the entire table every time something changes) or if the React abstractions allow you to make game-changing optimizations which would be too hard for you to make in regular code.
Since you mentioned batching, here's a simple example: it's extremely hard to find a case where a single update will be faster, because the combined time to run the framework code and then make the update is always going to be greater than simply making the update directly. If, however, you're making multiple updates it's easy to hit pathologically bad performance due to layout thrashing[2], where the code performing an update reads something from the DOM which was invalidated by an earlier update, requiring the browser to repeatedly recalculate the layout.
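To make that concrete, here's a toy example of the write-read-write cycle (the element list and styles are invented for illustration). The first function forces a synchronous layout on every iteration; the second batches all reads before all writes so the browser only lays out once:

    // Pathological version: each style write invalidates layout, and the
    // offsetHeight read on the next line forces the browser to recalculate
    // layout immediately, once per row.
    function thrashing(rows) {       // `rows` is an array of elements
      rows.forEach(function (row) {
        row.style.height = '2em';        // write: invalidates layout
        console.log(row.offsetHeight);   // read: forces synchronous layout
      });
    }

    // Batched version: do all of the reads first, then all of the writes,
    // so layout is only recalculated once.
    function batched(rows) {
      var heights = rows.map(function (row) { return row.offsetHeight; });
      rows.forEach(function (row, i) {
        row.style.height = (heights[i] * 2) + 'px';
      });
    }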
That can be avoided in pure JavaScript by carefully structuring the application to avoid that write-read-write cycle or by using a minimalist library like Wilson Page's fastdom[3]. This is quite efficient but can be harder to manage in a large application, and that's where React can help by making that kind of structure easier to code. If you are looking for a benchmark where React will perform well, that's the area I'd focus on, looking at both the total amount of code and the degree to which performance optimizations interfere with clean separation, testability, etc.
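For example, fastdom gives you that separation without restructuring everything by hand: measure (read) callbacks are queued and run before mutate (write) callbacks within an animation frame. This sketch assumes fastdom is loaded as a global and uses its current measure/mutate API (older versions call these read/write):

    // Same work as the batched example above, but fastdom does the batching:
    // every queued measure (read) runs before every queued mutate (write)
    // inside one frame, so the writes can't invalidate layout for the
    // remaining reads.
    function resizeRows(rows) {
      rows.forEach(function (row) {
        fastdom.measure(function () {
          var height = row.offsetHeight;             // batched read
          fastdom.mutate(function () {
            row.style.height = (height * 2) + 'px';  // batched write
          });
        });
      });
    }

The useful part in a large app is that independent pieces of code can each call measure/mutate without knowing about one another and the reads and writes still end up batched.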
EDIT: just to be clear, I'm not saying that it's wrong to use React but that the reasons you do so are the same as why we're not writing desktop apps entirely in assembly: it takes less time to build richer, more maintainable apps. The majority of web apps are not going to be limited by how quickly any framework can update the DOM.
1. I partially reduced that to a smaller testcase in https://gist.github.com/acdha/092c6d79f9ebb888496c which could use more work. For simple testing that was using JSX inline but the actual application used a separate JSX file compiled following normal React practice.
2. See e.g. http://wilsonpage.co.uk/preventing-layout-thrashing/
3. https://github.com/wilsonpage/fastdom
React allows you to optimize as much as you want while still keeping the component model. In this case, you realize you need to keep appending new rows indefinitely.
You make a component that puts a reference div into the DOM. Then you override the default shouldComponentUpdate to return false, and when you get new data you create the raw DOM elements in a document fragment and call this.refs['elem'].appendChild(newDom) -- 0.14 syntax.
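Something like this, in 0.14-era syntax (the component and prop names are just illustrative): React renders the container once, shouldComponentUpdate keeps it from ever re-rendering, and new rows get appended imperatively through the string ref.

    var AppendOnlyTable = React.createClass({
      // Opt out of React's re-render entirely: updates are applied by hand.
      shouldComponentUpdate: function () {
        return false;
      },

      componentWillReceiveProps: function (nextProps) {
        this.appendRows(nextProps.newRows || []);
      },

      appendRows: function (rows) {
        var fragment = document.createDocumentFragment();
        rows.forEach(function (row) {
          var el = document.createElement('div');
          el.textContent = row.label;
          fragment.appendChild(el);
        });
        // With 0.14 string refs, this.refs.elem is the real DOM node.
        this.refs.elem.appendChild(fragment);
      },

      render: function () {
        // Rendered exactly once; everything after that is raw DOM appends.
        return <div ref="elem" />;
      }
    });

From the outside it's still an ordinary component whose parent just passes new rows as props, but the hot path never touches the virtual DOM.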
Premature optimization is usually bad. React allows you to write your app and then go as deep as necessary when optimizing later. The fact that you can do this while still staying within React is a testament to the power of the framework/library.
The fact that a Google employee who pushed web components has a problem with a framework he doesn't know, in a case that should usually be avoided, without the optimizations that are possible, says more about him than the framework he is criticizing.
> The fact that a Google employee who pushed web components has a problem with a framework he doesn't know, in a case that should usually be avoided, without the optimizations that are possible, says more about him than the framework he is criticizing.
Or possibly that you haven't paid enough attention to what he wrote. He was careful to acknowledge that React's productivity wins are significant but wanted to make a point about how important it is to regularly test performance rather than just assuming the hype is universally true – or, conversely, that people talking about native browser performance have done the broad, valid benchmarking needed to support sweeping general claims.
As an example of the difference, you're reacting defensively, trying to downplay real concerns which are easily encountered on any large project, and attacking the source rather than engaging with the actual demonstrated problem. That might feel good but unfortunately problems aren't fixed by pointing out that the reporter works for what you perceive as The Other Team.
I'm glad to see actual React developers are responding differently by trying to improve performance on weak points: