You can use it for any task which takes too long to fit into a web request.
A simplistic example: you have a web form where a new user can choose their interests or hobbies from a big list. The user clicks submit and then sees a list of other users on your site with similar interests.
For small numbers of users and a simplistic algorithm, this could probably be done right in the web request (i.e. in your Django view), but with millions of users and a complex recommendation algorithm, you could have Celery do the task in a thread separate from the web request.
You could then immediately forward the user to a page that says "Your recommendations are being calculated." The user can come back later to see whether the recommendations are done, or you could use AJAX polling to load them in (or even display a progress bar) once the Celery task finishes.
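To make the shape of that flow concrete without standing up a broker, here is a stand-alone analogy using the standard library's concurrent.futures rather than Celery itself: submit the slow job, return a handle immediately, and poll it later, just as a view would poll a Celery AsyncResult. The function and user names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def calculate_recommendations(user_id):
    # Stand-in for an expensive matching algorithm.
    time.sleep(0.1)
    return ["user_%d" % (user_id + 1)]

executor = ThreadPoolExecutor(max_workers=1)

def submit_view(user_id):
    # Returns immediately with a handle, like task.delay() would
    # return an AsyncResult; the page can say "being calculated".
    return executor.submit(calculate_recommendations, user_id)

def poll_view(future):
    # Like checking AsyncResult.ready() from an AJAX endpoint.
    if future.done():
        return future.result()
    return None
```

With Celery the handle would be an AsyncResult keyed by task id, but the submit/poll structure is the same.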
We use it extensively, essentially for anything that doesn't need to happen "just this second". It's also replaced everything we used to run from cron, since it's easy to carry around in source control.
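Replacing a cron entry usually just means declaring a periodic task in settings, which then travels with the code. A hedged sketch in the old django-celery style configuration; the task path is hypothetical:

```python
# Illustrative only: replaces a daily cron job with a celerybeat entry.
# "myapp.tasks.send_digest_emails" is an invented task path.
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "send-digest-emails": {
        "task": "myapp.tasks.send_digest_emails",
        "schedule": timedelta(hours=24),
    },
}
```

Run the celerybeat process alongside your workers and it dispatches these tasks on schedule, no crontab editing on each machine.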
Most of our emails, expensive calculations (we have some match-making algorithms that run as tasks), notifications and all sorts of fun things are done in tasks. Moving as much of our code as possible into Celery has made it much easier to scale (adding a new node is trivial) and keeps our pages much more responsive.
That Celery makes it all so very easy to do is icing on the cake.
I've set it up for a project where incoming HTTP requests can trigger outgoing HTTP requests. Something I found useful was a wrapper function that attempts to execute the task asynchronously (the_task.apply_async) but falls back to running it in-process (the_task.apply). That way, if RabbitMQ isn't running, everything still basically works. The downside is that if RabbitMQ is running but celeryd isn't, the tasks just pile up in the queue.
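Such a wrapper might look like the following sketch. It is not the exact code from that project; in practice you would catch the specific broker connection error your Celery/kombu version raises rather than a blanket Exception:

```python
def run_task(task, *args, **kwargs):
    """Queue `task` on the broker, or run it in-process if the broker is down."""
    try:
        # Normal path: hand the task to the broker (e.g. RabbitMQ)
        # and return immediately with an async result handle.
        return task.apply_async(args=args, kwargs=kwargs)
    except Exception:
        # Broker unreachable: execute the task synchronously in this
        # process instead, so the request still completes.
        # (Catch your broker's connection error here, not bare Exception.)
        return task.apply(args=args, kwargs=kwargs)
```

Call it as run_task(the_task, arg1, arg2) wherever you would otherwise call the_task.delay(arg1, arg2).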