
It still surprises me that NGINX beat out Apache so quickly even though Apache had way more modules and was/is entirely free vs. NGINX which is more or less "open core" with some nice features requiring commercial licensing.


On the other hand, the unreadable weird-ass pseudo-XML configuration files of Apache made anyone touching them wish for something better.

I also expect ngx_lua did a lot for adoption; the fact that you could always "shell out" to Lua when you needed to was a huge boon, even just for peace of mind.


> On the other hand, the unreadable weird-ass pseudo-XML configuration files

If I have one gripe about NGINX, it's that its configuration is a still-half-baked DSL with quirks you wouldn't expect, and when it errors you don't get great feedback.

Examples: you can have an if clause, but no else attached to it. You can't have an if clause with multiple conditions. Finally, "if [might be] evil." [1]

You end up writing a bunch of partitioned control flow statements and you're never really sure at what level of config hierarchy they would best be applied.
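
For illustration, a minimal sketch (the variable and value names here are made up): since there is no else, the usual workaround is to set a default first and let a lone if override it:

    # no "else" exists: set the default first, then override it
    set $backend "default_pool";
    if ($http_x_canary = "1") {
        set $backend "canary_pool";
    }
    # and a single "if" can't test two conditions at once, so
    # compound logic has to be chained through helper variables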

I love the product, but compared with Apache's XML, NGINX's semi-declarative, hierarchical blocks aren't night-and-day better.

[1] https://www.nginx.com/resources/wiki/start/topics/depth/ifis...


I agree with you completely. Nginx's config syntax is better than Apache's but it still feels like mystery meat. Can you use this directive or option within this block? Maybe, maybe not. If not, why? Who knows. It's just not allowed to use map within a location block and that's just how it is, okay?

My dream web server has Nginx's capabilities and Lighttpd's Lua configuration files/scripts. Is that what ngx_lua does? I've heard of it before but never really gave it a look.


With the rewrite and map blocks it's maybe a little easier to write fewer if statements… https://stackoverflow.com/questions/47724946/nginx-rewrite-b...
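
For example (hypothetical paths): a map block, which can only appear at the http level, often replaces a chain of ifs, and it's evaluated lazily, only when its output variable is read:

    map $request_uri $redirect_target {
        default        "";
        /old-page      /new-page;
        ~^/blog/(.*)$  /articles/$1;
    }

    server {
        if ($redirect_target != "") {
            return 301 $redirect_target;
        }
    }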


Oh, I've used these plenty, but there are still conditionals that sometimes require, or are most clearly expressed with, if statements, particularly complex redirects that rely on a number of individual conditions being met.

I've seen these manifest in the wild as stuff like:

    set $setWeirdVar "";
    set $setValue "";
    if ($thing ~* (match)) {
        set $setWeirdVar "Y";
    }
    if ($otherThing = "value") {
        set $setValue "${setWeirdVar}E";
    }
    if ($thirdCondition ~ (another|match)) {
        set $setValue "${setValue}S";
    }
    if ($setValue = "YES") {
        # do a thing here
    }

As clunky as that is, I've found it recommended in SO threads.


To be fair, NGINX config is not better: an ad-hoc soup of syntax that grew without a clear concept to govern it all.

I would prefer a simple JSON file any day. Or some Lispy S-expressions. Or some TOML or well structured XML and XSD even.

NGINX makes you learn another language just for one tool, and for a config that mostly (always?) doesn't need to be anything more than declarative.


No JSON, please. You can't have comments. A JSON config would be a deal-breaker for me to use a server.


Oh gosh, I had to try and figure out an Apache config file some time last year - it was a real slog trying to figure out what was happening thanks in no small part to the poor documentation of their pseudo-XML.


You can do similarly in Apache with the Perl sections...

    <Perl>
    # dynamic perl config goes here..
    </Perl>


It should be remembered that NGINX is used as a reverse proxy in front of a lot of servers behind the scenes. That NGINX is the web server identified up front doesn't mean as much as it might because of this architectural construct. I use NGINX to front sites that have Apache on the back end, and as a result the Internet spiders think my websites are running NGINX rather than Apache. NGINX is incredibly easy to configure as a reverse proxy, image router, and SSL front-end. Thanks, Igor.
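
A minimal sketch of that kind of SSL front-end (hostnames, paths, and ports here are made up):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/ssl/example.com.pem;
        ssl_certificate_key /etc/ssl/example.com.key;

        location / {
            # Apache listening on the back end
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }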


> That NGINX is the web server identified up front doesn't mean as much as it might because of this architectural construct.

The exact same argument can be made to explain why nginx is undercounted. A lot of setups run nginx behind other proxies, so you'll count the front proxy: a Varnish, a single nginx, CloudFront servers (are they running nginx?), while in reality there may be many nginxes behind it.

Nonetheless: nginx is a gift, and thanks go out to Igor, regardless of how well the spiders can count the number of nginx instances.


Granted I've done exactly this before, but why put Nginx in front of Apache? In my experience it added headaches without any real benefit.

(Unless you don't mean Apache webserver but rather some other Apache product)


Nginx manages a ton of connections better and can serve static files very fast. It can then multiplex the dynamic requests into fewer connections to Apache. If you mean why not only use nginx, I would guess that's easier than changing your legacy systems to use nginx (e.g. if you have a ton of htaccess files). It's also possible you got better performance with mod_php although most people seem to claim that php-fpm with nginx is faster.
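
Roughly, the multiplexing part looks like this (a sketch; upstream names and paths are made up):

    upstream apache_backend {
        server 127.0.0.1:8080;
        keepalive 16;    # reuse a small pool of backend connections
    }

    server {
        # static files served by nginx directly
        location /static/ {
            root /var/www;
        }

        # dynamic requests funneled to Apache over kept-alive connections
        location / {
            proxy_pass http://apache_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }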


> If you mean why not only use nginx

Yes, this is what I meant. I originally wrote a SaaS app that was hosted through Apache and ended up putting NGINX on top of it for the aforementioned reasons. But eventually testing showed that removing Apache just made the whole thing a whole lot more manageable. I have friends with similar anecdotes. Just putting NGINX in front from the get-go would have saved a lot of tech debt.


2012-2015 I worked at a shared hosting company, and towards the end of my tenure there we revamped the architecture to be centered on nginx (SSL termination, HTTP/2 support, etc.) and invested quite a bit in API and GUI support for rewrite rules, redirects, etc.

However, for better or worse, a lot of the software people want to run on shared hosting come with a .htaccess file and documentation for how to configure it otherwise. So we gave customers a choice to put Apache behind nginx.

Unfortunately I left too early to learn what percentage of customers ended up enabling Apache, but they're still running this architecture today.


I have done this to host multiple services (running using multiple users and setups) from one host.


These days the cool kids love to call reverse proxies "load balancers" (when you have n>1 backends).


We added Nginx to our hosting environment in front of Apache and knew a bunch of other folks who did the same. The outwardly visible adoption of Nginx was not necessarily zero-sum with Apache’s footprint at first.

In my case we scaled Drupal and Wordpress sites by using Varnish as a reverse proxy cache in front of Apache. But then we wanted to go HTTPS across the board, which Varnish does not handle. So we terminated HTTPS in Nginx and then passed the connection back to the existing Varnish/Apache stack. I know other folks just skipped or ripped out the Varnish layer and used Nginx for both HTTPS and caching.

At the time both Drupal and Wordpress (and other popular PHP projects) depended on Apache-specific features for things like pretty URLs and even security. Over time, the community engineered away from those so there was little reason to prefer Apache anymore.


The web changed. We moved away from static HTML pages and CGI scripts to monolithic application servers in Java, Ruby, Python, etc. Apache excelled at static content sites and simple auth scenarios (remember .htaccess files?) but became painfully complex when proxying application servers. Nginx was doing exactly what was needed at exactly the time it was needed.


And yet interestingly, nginx started in 2002, which was still old-school internet. So really, it was ahead of its time.


2002 was the start of the glory days of Java web monoliths, big monstrosities built with Spring. Rails, Django, etc. came a couple of years later, and that's when monolithic app servers really took off.


Painfully complex proxying? Can you explain? I still use Apache as my go to HTTP server and proxying is just 2 config lines.
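
Presumably something like this, assuming mod_proxy and mod_proxy_http are loaded (the path and backend address are made up):

    ProxyPass        "/app/" "http://127.0.0.1:8080/"
    ProxyPassReverse "/app/" "http://127.0.0.1:8080/"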


Around here, Apache was heavily used for its mod_php. It could run PHP embedded, without a complex FastCGI setup.

Then everyone moved to ruby and python (and also perl) and mod_php stopped being an advantage.
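
For comparison, the FastCGI setup that replaced mod_php is only a few lines in nginx these days (the socket path is a common default, but it varies by distro):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # php-fpm listening on a unix socket
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }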


Everyone moved to Ruby and Python? In your bubble perhaps, but PHP is probably more popular than Ruby and Python combined globally.


Definitely not everyone, and you might be right based on actual number of websites, but the zeitgeist definitely moved to Rails and Django for a while.


Somehow I still see .htaccess files in projects that aren't that old (and in a few cases never used Apache).


Yes - this would be my take as well.


Back when Apache was beaten, there was no commercial licensing in Nginx.

Also the Apache that was beaten was Apache 1, which was fork-only, and that was the whole reason Nginx was written in the first place.

Then Apache did Apache 2 with its MPM modules and badly missed the mark. After that, Apache was doomed. No async support == dead. It was that simple.


This jibes with my memory of that time as well. Apache just couldn't keep up with Nginx's async speed, and if you didn't have to deal with PHP (before FastCGI's adoption), there was no real reason to use Apache.

And post-FCGI's adoption, you didn't need to use Apache, so... why use it?


mpm_event though from Apache 2.4 was async and kind of great.


I think the modules were Apache's curse, they made it possible to bring down Apache. Speed is great, but Always Responding is a more important feature. I'm sure most Nginx configurations could have been done with Apache without any real performance issue, but Apache hurt its own reputation by doing extra things poorly.


Nearly all the performance comparisons between stock Apache and Nginx at the peak of the hype were like comparing Word vs. Notepad: an Apache installed from a distribution package (with its full range of enabled modules) against an Nginx compiled from source with nothing extra. A vanilla, well-built Apache is perfectly fine for real-world use, at the same level as Nginx, because by the time you are close to the limits of these pieces of software, your scalability problems are elsewhere.


I'm reminded of how Linux beat GNU Hurd, or how systemd is slowly replacing SysVinit. Highly modular systems often lose out to more monolithic ones, since they tend to be slower, more complex, and harder to use in practice, despite their theoretical advantages.


At the time there was no commercial Nginx, only open source. Also, Apache was a huge pain to configure for anything other than serving static files. Nginx config was a delight to deal with by comparison.


Yes - this. Building my first web site (we didn't call them apps back then) and wrangling with Apache and OpenSSL to enable encryption was ... not fun.


Tried Caddy yet? They provide really compact configuration templates and if needed can be reconfigured using the API.


For me, the simplicity of Nginx is what beats it out over Apache.

I've always felt like Nginx "just works" by default and creating configurations is relatively easy.


Yeah, I remember starting a project with Apache in 2017, I think, and when discussing the (very quick) move to nginx it appeared that Apache's default settings are great for a personal home page and not much else, while nginx's defaults seemed to handle a moderately busy e-commerce site (or more) with no trouble at all.


To me, it coincided with async (long polling/comet/SSE), more live, web applications. Apache had a horrible story around this, with one thread per connection (I believe Apache 2 may have had an optional execution model, which was also uncomfortable for some reason).

I used lighttpd for this, mentioned in another thread, rather than nginx, which was a similar breath of fresh air coming from Apache -- not only for the event loop model built around epoll and friends, but also the configuration and general deployment.


Back in 2005-6, nginx was so far ahead that a generation of engineers adopted it. Its use of signals for zero-downtime upgrades (USR2) is still one of the best features few other servers get right.

The syntax to configure is clear enough while not being super verbose…
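
The upgrade dance is just signals against the master process (the pid file paths below are the stock defaults):

    # replace the binary on disk, then start a new master that
    # inherits the listening sockets
    kill -USR2 "$(cat /run/nginx.pid)"

    # gracefully shut down the old workers (old master stays as a fallback)
    kill -WINCH "$(cat /run/nginx.pid.oldbin)"

    # once the new master looks healthy, retire the old one
    kill -QUIT "$(cat /run/nginx.pid.oldbin)"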


These days systemd, with its file descriptor store, makes implementing live update of a service without dropping a single connection rather straightforward. But Nginx managed to do that on its own a long time before systemd.
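
A sketch of the systemd side (the service name is made up): the daemon parks its listening sockets in the store via sd_notify("FDSTORE=1") and gets them back after a restart:

    [Service]
    ExecStart=/usr/local/bin/myserver
    # allow the service to park fds (listening sockets) in systemd,
    # so they survive restarts of the service itself
    FileDescriptorStoreMax=16
    Restart=on-failure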


Nginx always worked better and didn't need to be tuned like Apache until you got to really enormous scales, which were rare, while even a little load on Apache would require tweaking settings and experimentation.


I remember in the early 00s WAMP/LAMP was the stack of choice for getting quickly set up to write web applications, but configuration was often painful, especially on the Apache side. At that time I was working on hobby projects, like one private server I used to administer. When Rails came out it was a breath of fresh air compared to PHP, and I distinctly remember switching to it. NGINX was also picking up steam at that time.


Convenience is often worth a lot more than the ultimate in flexibility.

This is why email is now more or less the domain of a couple of very large companies.


Which modules do you miss in nginx that are free in Apache?


I don't know how surprising it is, considering ease of use and "just works" beats all other considerations every time. If it didn't, we'd still all be using Novell.



