
Looks impressive, but proof will be in the actual day to day dev experience and configuration, not the perf. Vite and esbuild are fast enough, and I feel the winner will be more about usability, docs, easy config, etc.

That aside, it is just so frustrating and sad that this just continues the fragmentation in the JS build space. It is beyond exhausting at this point. I don't care about vite vs (or with) esbuild vs turbo. I just wanna `script/build` or `yarn dev` and not think about it anymore.

It seems like collaboration and consolidation around common tooling is just impossible and not even considered a possibility at this point. We are forever stuck in this world of special snowflake build toolchains for damn near every app that wants to use modern JS.



An explicit goal I would personally like to see from build tools in the JS world is long term stability. I don't want my build tools to have a new major version every year. Semantic configuration improvements simply aren't worth all the churn. Adding new functionality is fine and great, but keep it backward compatible for as long as you possibly can.

This is an area where we could learn something from the Golang ecosystem. You're always going to end up with some warts in your API. Tools with warts that are consistent, documented, predictable, and long-lasting are so much easier to manage than tools that are constantly applying cosmetic revamps.


Agreed. Every time NextJS changes out their build system for speed, its users lose out on all kinds of functionality that they were depending on before.

Moving away from Babel to SWC meant we could no longer use SCSS within JSX styled components. We first switched everything to plain CSS, which was a nightmare IMHO. Now slowly switching things to SCSS modules.
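For anyone mid-migration, the SCSS-modules wiring is just a co-located stylesheet imported as an object of locally scoped class names. File and class names below are hypothetical, and this fragment only works under a bundler configured for CSS modules, not standalone:

```jsx
/* Button.module.scss (hypothetical):
     .button { padding: 0.5rem 1rem; }
*/

// Button.jsx: the bundler replaces `styles.button` with a
// class name scoped to this component at build time.
import styles from "./Button.module.scss";

export default function Button({ children }) {
  return <button className={styles.button}>{children}</button>;
}
```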

Now with Turbopack, we lose that too: https://turbo.build/pack/docs/features/css#scss-and-less

"These are likely to be available via plugins in the future." Fantastic


As a performance nightmare you can't wake up from, SCSS can't die soon enough. As someone who depends on it currently though I def understand the frustration with losing support.


The new version (written in Dart of all things) seems pretty fast. The old Ruby implementation was insanely slow, and the node-sass rewrite was insanely broken.

Whatever they got now though is just about perfect for me


Embedded Sass promises to be faster anyway, though still slow IMO.

There are some issues with realizing the potential though. Namely, a standard-ish CRA React app will have thousands of SCSS compilation entrypoints due to importing SCSS in individual component modules.

Lack of process reuse is causing many to see slightly SLOWER compile times. Then you have:

* Lack of tooling support: Vite?

* Need to migrate SCSS syntax

* ??

As soon as it's a free-ish upgrade with high ROI on swapping it in, I'll take it! I think SCSS is a dead-end though. Modern CSS + postcss for some CSS-Next polyfills is the way forward IMO.


Yeah, it really pains me that Sitecore is doubling down on them as the main FE framework.


> Looks impressive, but proof will be in the actual day to day dev experience and configuration, not the perf.

I think it really depends on the use case. I use Webpack, but it's all configured for me by create-react-app and I don't have to mess with it. If my configuration could automatically be ported from Webpack to Turbopack and my builds got faster, great :)

Of course, that's not the only use case and I agree that speed alone won't decide the winner.


    // @TODO implement and publish
    import { webpackConfigTranslator as translate } from "turbopack-webpack-compat";
    import webpackConfig from "./webpack.config.js";

    const turbopackConfig = translate(webpackConfig);

    export default turbopackConfig;
Any takers?


I don’t think any of the current options are good enough that I would want the community to settle on one of them at this point at the expense of any innovation or experimentation.


We’ve been doing JavaScript development for how many years now? And still no bundling options that are good enough? Or at least one that can continue to evolve vs. creating a successor, etc.


By my count it's 23 years.

The biggest issue in FOSS is folk don't wanna join a "good enough" project and move it. Sometimes the project is contributor hostile (rare?)

And we end up with basically: "I'll build my own moonbase with blackjack and hookers!"

All that starting from scratch costs loads of impossible-to-recover time.


That’s really not what’s going on here.

All the previous gen bundlers are written in JS and support a giant ecosystem of JS plugins. There’s no incremental way to migrate those projects to Rust. The benefit of these new bundlers is that they are literally full rewrites in a faster language without the cruft of a now mostly unnecessary plugin API.

And the “cost” of this? Some of these new bundlers are written by just 1 or 2 core contributors. Turns out, with the benefit of hindsight, you can make a much better implementation from scratch with far fewer resources than before.


Well, that's what is really frustrating here. Turbopack is built by the creator of Webpack. So, instead of fixing the bloated megalith that webpack has become, they are just moving on to greener pastures. But this time it'll be different™


Webpack 5 was that push to fix the monolith. I would guess that after that herculean upgrade effort that the creator of Webpack has a pretty good idea of what’s fixable and what’s an inherent limitation of the existing tool.


Microsoft has been able to evolve Windows and Office and SQL Server for decades, with huge customer bases…


I mean, there’s really only so much you can do. If major changes are required, you can either make a significant new version with massive compatibility issues, splitting the community (à la Python 2 & 3). And even then, you’re still building off of the old code.

Or start from scratch and make something better.

Either way, you split the community, but at least with the new tool you can (hopefully) manage to make it a lot better than a refactor would have been. (In some areas, anyways.)

Plus, this allows the old project to continue on without massive breaking changes, something its users probably appreciate. And this old project can still create new major versions if needed, which is something you don’t get if you have a major refactor because everything meaningful gets shipped to the refactored version.

So I think spinning off a new project is a net good here. It doesn’t impact webpack very much unless people ditch it and stop maintaining it (unlikely). It lets them iterate on new ideas without maintaining compatibility. (Good, when the big issue with webpack is its complexity.)

So if the idea turns out to be bad, we haven’t really lost anything.


> instead of fixing the bloated megalith that webpack has become

Megalith? Isn't it super modular and configurable?


Yeah. I’m consistently surprised and disappointed by how much resistance people have to getting their hands dirty digging through other people’s codebases. It’s fantastically useful, and a remarkable way to learn new approaches and techniques.

The world doesn’t need yet another JS bundler. It needs the one js bundler that works well with the wide variety of niche use cases.


I dig through other people’s codebases all day long. I really don’t want to do that in my free time as well. Especially to fix the garbage that they built in the first place.

It’s just not fun. And that’s one of the most important reasons I do this job.


Webpack was flexible and “good enough.” But it turns out Rust-based bundlers are 10-100x faster, and so the definition of “good enough” has changed, for good reason.

It’s hard to overstate how game-changing this new wave of zero-config, ultrafast Rust bundlers is for the day-to-day dev experience.


For me the age of JavaScript isn't particularly important. I just don't think any one of the well-known options are good enough that I would want the community to throw a big portion of support behind it.


That seems to be the chicken-and-egg problem here: the current problem is endless repetitions of "none of the existing options are good enough, let us build a new option". The call above is "just pick one and everyone work together to make it better", but we're back to "none of the existing options are good enough".


Specific to build tools, a number of projects are VC-funded. Rome raised millions of dollars in VC money (https://rome.tools/blog/announcing-rome-tools-inc/). This offering is now funded by Vercel.

The same problem plays out in the JS engine space (Deno raised $21M and Bun raised $7M) and in the framework space (e.g. Remix raised $3M). As long as there's money to be made and investors to fund projects, there won't be consolidation.


Fully agreed.

I would add that esbuild has set the bar very high for documentation and configuration. I come away very impressed every time I need to touch esbuild, which is not usually my expectation when it comes to the JS build-and-bundle ecosystem.

And while vite is still young, it does a good job integrating itself with many different JS stacks. The docs are also great, and the project has incredible momentum.

Between esbuild and vite, I feel pretty set already. Turbopack will need to be that much better. Right now, it doesn’t look like much more than an attempt to expand the turborepo business model. Let’s see where they take it.


> special snowflake build toolchains

That reminds me, wasn't there a build tool called Snowflake?

Oh, it was called Snowpack [1]. And it's no longer being actively maintained. Yeesh.

[1]: https://www.snowpack.dev/


My disappointment related to this is that I still think Snowpack's philosophy was the right one for JS present and especially for JS future: don't bundle for development at all because ESM support in today's browsers is great and you can't get better HMR than "nothing bundled"; don't bundle for Production unless you have to (and have performance data to back it up).

I know Vite inherited the first part, but I still mostly disagree with Vite on switching the default of that last part: it always bundles production builds, and switching that off is tough. Bundling is already starting to feel like YAGNI for small-to-medium-sized websites and web applications between modern browser ESM support, modern browser "ESM preload scanners", and ancient browser caching behaviors, even without HTTP/2 and HTTP/3 further reducing connection overhead to almost none.
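As a concrete sketch of the unbundled approach (paths hypothetical): native module scripts never block parsing, and `modulepreload` hints let the browser fetch the import graph up front instead of discovering it one module at a time:

```html
<!-- Preload hints: fetched in parallel, before main.js runs. -->
<link rel="modulepreload" href="/js/store.js">
<link rel="modulepreload" href="/js/components/header.js">

<!-- type="module" scripts are deferred by default. -->
<script type="module" src="/js/main.js"></script>
```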

But a lot of projects haven't noticed yet because the webpack default, the Create-React-App default, the Vite default, etc. is still "always bundle production builds", and it isn't yet obvious that you may not need it and that we could move away from bundlers again on the web.


Vite is driven by pragmatism. They are bundling for prod because it is pragmatic.

If they do not bundle for prod, then the initial load triggers a flood of network requests, which causes slowness.


I think a lot of people assume this is the case for every website/web application without actually watching their network requests and having hard data on it. Like I said, small to medium sites and applications in some cases are more efficient with plain ESM modules than ever before (because it is the ultimate code-splitting), even before you get into newer protocols like HTTP/2 and HTTP/3, which significantly reduce connection overhead for network requests and make a lot of old bundling advice outdated at best.

Plus, ESM modules always load asynchronously without DOM blocking so in any and all "progressive enhanced" sites and apps, it may feel slow to hydrate parts of it, but you generally aren't going to find a better performing experience while it hydrates. The page itself won't feel "slow" to the user because they can start reading/scrolling faster.

Some of it comes down to your framework, of course, and if you can afford SSR or are using a "progressive enhanced" style of site/application. For one specific counter-instance I'm aware of, Angular's NgModules dependency injection configuration mess routes around ESM's module loaders and creates giant spaghetti balls the browser thinks it needs to load all at once. I can't see recommending unbundling Angular apps any time soon as long as they continue to break the useful parts of ESM style loading with such spaghetti balls. Other frameworks, especially ones with good SSR support/progressive enhancement approaches are increasingly fine to start using unbundled ESM on the modern web.


I thought HTTP/2 made the number of network requests a non-issue (at least for 100 files or so). The only advantage of the larger bundle is through compression.


Even compression is maybe somewhat a non-issue with Brotli (now supported by every major browser and often "required" for strong HTTP/2+ support) and its hand-tuned dictionary for things like common JS delimiters/sequences. With gzip you do sometimes need large bundles before its automatic dictionary management picks up things like that. In theory, Brotli always starts from a place of more "domain knowledge" and needs less "bulk" for good compression.


Have you ever tried to load a development build of a project with 3000 modules in Vite? It’s like a webpack build all over again every time you load the page.


I didn't have much of any trouble in Snowpack with even the largest projects I worked on but at least with Snowpack it was easy enough to target "freeze" a sub-tree of modules into Snowpack's "web_modules" with esbuild. I don't know about Vite in that case as I haven't been convinced to actually try Vite despite Snowpack's shutdown endorsement. Recent stuff for me has either been projects still using webpack for platform reasons or ESM with hand-targeted esbuild bundles for specific module trees (libraries) without using Vite or Snowpack above that. About the only thing I feel I'm missing in those esbuild workflows is a good HMR dev server but the simplicity of using fewer, more targeted tools feels handy to me.


The folks working on Snowpack didn't just give up. They transitioned over to working on Astro. It was a very positive move. I see great things coming out of that project as we move into a more "server rendered" future.


JS has 10x the number of developers of all other languages combined; that translates to a lot of ideas on how to progress the different avenues of JS land.


It is also the primary language taught to bootcamp developers looking to get started, and so a lot of suggestions and ideas come from people without any real experience.


very much doubt that it's the bootcamp-devs-without-real-experience developing new-gen bundlers and transpilers


Indeed, they do left-pad instead.


Build tooling stability is one of the great undersold benefits of the ClojureScript ecosystem IMO.

The build process is defined by the compiler (using Google Closure tooling under the hood) and has not significantly changed since the introduction of ClojureScript in 2011.

Since all CLJS projects use the same compiler, the build process works the same everywhere, regardless of which actual build program is being used (Leiningen, shadow-cljs, etc).


And that's one of the reasons why people don't use clojurescript. Arbitrary npm package import and interop was not possible until late 2018 with shadow-cljs. Build tooling "stability" is only a thing if you believe in reinventing every wheel, not using third party dependencies, and pretending that the half-baked "clj" tool doesn't exist.


> Vite

On large applications Vite is fast to build, but loading your application in the browser issues thousands of HTTP requests, which is tragically slow.

Esbuild is basically instant to build, and instant to load on the same application. It’s a shame it doesn’t do hot reload.

If Turbopack can give us the best of both worlds, then that’s absolutely an improvement I want.


Yeah, working on an app with >8k modules, seeing >4s page loads on refresh. It's also weird how it wants to force you to not use deep imports on package deps, but then doesn't want you using index files in your own code.

I believe there are areas for improvement here that Vite can make, though. They need to implement finer-grained browser-side module caching, dependency sub-graph invalidation (Merkle tree?), and figure out how to push manifests and sub-manifests to the browser so it can proactively load everything without needing to parse every module in import order, making network requests as it goes.

Lots to do lol.


> I just wanna `script/build` or `yarn dev` and not think about it anymore.

Parcel might be a good fit for you: https://parceljs.org/


We've already migrated our old webpack rails front end to esbuild as of a couple months ago, and couldn't be happier.


Is rollup still a thing?


What do you mean by "still"? I just learned that it exists a month ago! I wish I was joking...

JS is not my main language, true, but damn it's hard to keep up. I think it took me less time to be productive with TypeScript than with webpack.


Yeah I just learned about Vite a few days ago, and Turbopack today.

Apparently rollup is used under the hood by Vite, and the plugins and config syntax are compatible, but I couldn't get my rollup config to work with Vite.


Yes, e.g. Vite uses Rollup for production builds and esbuild for development, with plans to make esbuild usable for production builds too.


Rollup is used by some of these tools. For instance, Vite uses Rollup for production builds.



