
What are the underlying assumptions we can break here?

For example, what if tasks were not monolithic? As the queue size grows, increase the caching timeout so you hit the DB less often, or otherwise shed per-task work under load.
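A rough sketch of that idea, assuming a linear policy with made-up thresholds and TTL bounds (none of these numbers come from a real system):

```python
# Hypothetical sketch: grow the cache TTL with queue depth so a busy
# queue hits the DB less often. Thresholds and bounds are illustrative.

def adaptive_ttl(queue_len, base_ttl=30, max_ttl=300, threshold=100):
    """Return a cache TTL (seconds) that grows linearly with backlog."""
    if queue_len <= threshold:
        return base_ttl
    # Scale toward max_ttl as the queue grows past the threshold.
    factor = min(queue_len / threshold, max_ttl / base_ttl)
    return int(base_ttl * factor)
```

The trade-off is staleness: the deeper the backlog, the longer cached answers are served without a fresh DB read.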

Or, what if the input task variance was bounded and we just initialized the queue with 10 tasks? This way, the queue length would never dip below 1 and would never exceed 20 (for example), so workers never starve and the queue never overflows.
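A toy model of the pre-seeded queue, with the seed and the band (1..20) taken from the example numbers above; the churn per interval is assumed bounded:

```python
from collections import deque

# Hypothetical sketch: pre-seed the queue so that bounded per-interval
# churn keeps its length inside a known band. Numbers are illustrative.
SEED, LOW, HIGH = 10, 1, 20

queue = deque(range(SEED))  # start with 10 placeholder tasks

def step(arrivals, departures):
    """Apply one interval of churn; fail loudly if the band is violated."""
    for a in range(arrivals):
        queue.append(a)
    for _ in range(min(departures, len(queue))):
        queue.popleft()
    assert LOW <= len(queue) <= HIGH, "variance bound violated"
    return len(queue)
```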



Some more:

Processing capacity might not be constant. If you're in a cloud, maybe you can launch more processing power as queue length increases.
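A minimal sketch of that scaling decision, assuming simple threshold-based autoscaling; `per_worker` and the worker limits are made-up knobs, and the actual launch/terminate calls would go to a real cloud API:

```python
import math

# Hypothetical sketch: pick a worker count so that each worker's share
# of the backlog stays bounded. All parameters are illustrative.

def desired_workers(queue_len, per_worker=50, min_w=1, max_w=32):
    """Target worker count for the current queue length."""
    target = max(min_w, math.ceil(queue_len / per_worker))
    return min(target, max_w)  # cap spend at max_w instances
```

A control loop would compare this target to the current fleet size and launch or terminate instances to close the gap, ideally with some hysteresis to avoid flapping.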

Multiple items in the queue might be satisfiable by the same processing, e.g. multiple requests for the same item. In that case, having more requests in queue can increase processing efficiency.
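A sketch of that coalescing, assuming each queued entry is a `(request_id, key)` pair and `fetch` is whatever expensive backend call serves a key (both names are hypothetical):

```python
# Hypothetical sketch of request coalescing: queued requests for the
# same key share one backend computation instead of one each.

def drain_coalesced(queue, fetch):
    """Group pending requests by key, fetch each key once, fan results out."""
    by_key = {}
    for req_id, key in queue:
        by_key.setdefault(key, []).append(req_id)
    results = {}
    for key, req_ids in by_key.items():
        value = fetch(key)        # one backend hit serves every duplicate
        for req_id in req_ids:
            results[req_id] = value
    return results
```

This is why a deeper queue can be *more* efficient per item: the more duplicates are waiting, the more requests each backend call amortizes over.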

Edit: another one. Not all requests may need the same quality of service; for some, best effort might be acceptable.
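One way to sketch that tiering: guaranteed requests always enqueue, while best-effort ones are shed once the queue is over capacity. The two tiers and the capacity are made up for illustration:

```python
import heapq

# Hypothetical sketch of tiered QoS: best-effort items are dropped
# under load, guaranteed ones are always admitted. Values illustrative.
GUARANTEED, BEST_EFFORT = 0, 1

def enqueue(heap, priority, item, capacity=100):
    """Admit the item, or drop it if best-effort and the queue is full."""
    if priority == BEST_EFFORT and len(heap) >= capacity:
        return False  # shed load: best effort means no admission guarantee
    heapq.heappush(heap, (priority, item))  # workers pop guaranteed first
    return True
```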


I wonder if it might also be possible to introduce a kind of tiered storage, where, for example, the queue starts persisting to cheap, massive storage (such as cloud block storage) instead of its usual mechanism. That does imply tier-2 events would read back more slowly once the busy period ceased, though.
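A toy version of that spillover, with an in-memory list standing in for the cheap cold tier; a real implementation would write to actual block storage, and this simplified version only approximates FIFO order across tiers:

```python
from collections import deque

# Hypothetical sketch of tiered queue storage: a bounded hot tier in
# memory, with overflow spilled to a stand-in for cheap block storage.

class TieredQueue:
    def __init__(self, hot_capacity=1000):
        self.hot = deque()
        self.hot_capacity = hot_capacity
        self.cold = []  # stand-in for slow, cheap tier-2 storage

    def push(self, item):
        if len(self.hot) < self.hot_capacity:
            self.hot.append(item)
        else:
            self.cold.append(item)  # spill: slower to read back later

    def pop(self):
        if self.hot:
            return self.hot.popleft()
        return self.cold.pop(0)  # tier-2 read path, after the busy period
```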



