For interval-oriented data that you want to query at different levels of detail through a query window, there's a trick I used in a commercial profiler that you can easily bolt on top of any ordered search tree or database system.
You keep a separate tree for each power-of-two scale. The tree for scale k contains intervals whose length is at least 2^k but less than 2^(k+1); an interval [u, v] goes in the tree for scale floor(lg(v - u)) under the key u. A query window [a, b] then becomes a range query a - 2^(k+1) < u <= b for each relevant scale k, after which you filter matches by actual overlap between [u, v] and [a, b]. (The lower bound works because no interval at scale k is as long as 2^(k+1), so anything starting at or before a - 2^(k+1) cannot reach a.) "Each relevant scale" is defined by the desired level of detail, e.g. you skip scales that are too small relative to the current zoom level.
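A minimal sketch of the scale assignment and per-scale range query, using sorted Python lists as a stand-in for real ordered trees (integer endpoints with v > u assumed; the lower bound is widened to a - 2^(k+1) since intervals at scale k can be nearly that long):

```python
import bisect
from collections import defaultdict

class ScaleTrees:
    """Toy stand-in: one sorted list of (u, v) pairs per power-of-two scale."""
    def __init__(self):
        self.scales = defaultdict(list)  # scale k -> sorted list of (u, v)

    def insert(self, u, v):
        k = (v - u).bit_length() - 1     # floor(lg(v - u)) for integer lengths
        bisect.insort(self.scales[k], (u, v))

    def query(self, a, b, min_scale=0):
        out = []
        for k, tree in self.scales.items():
            if k < min_scale:            # level-of-detail cutoff
                continue
            # Lengths at scale k are < 2^(k+1), so u > a - 2^(k+1) suffices.
            lo = bisect.bisect_right(tree, (a - (1 << (k + 1)), float('inf')))
            hi = bisect.bisect_right(tree, (b, float('inf')))
            for u, v in tree[lo:hi]:
                if v >= a:               # filter by actual overlap with [a, b]
                    out.append((u, v))
        return sorted(out)
```

Passing a larger `min_scale` is the zoom-level cutoff: small intervals are skipped entirely rather than returned and discarded.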
Each scale actually has two trees. One is what I just described; the other is a summary tree containing bounding intervals with recursive summaries of all intervals at finer scales. When you cut off queries at scale k, you also include data from the summary tree at scale k, so that everything in the query window is returned either verbatim or as part of a bounding interval's summary.
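One way to picture the summary trees: group finer-scale intervals into bounding intervals and attach an aggregate to each. The bucketing granularity (width 2^(k+1)) and the count aggregate below are illustrative choices, not the only ones:

```python
from collections import defaultdict

def build_summary(intervals, k, agg=len):
    """Group intervals from scales finer than k into bounding intervals,
    bucketed by windows of width 2^(k+1), each carrying a summary of its
    contents (here just a count; a real summary would be recursive)."""
    width = 1 << (k + 1)
    buckets = defaultdict(list)
    for u, v in intervals:
        buckets[u // width].append((u, v))
    out = []
    for i in sorted(buckets):
        members = buckets[i]
        lo = min(u for u, _ in members)
        hi = max(v for _, v in members)
        out.append(((lo, hi), agg(members)))  # bounding interval + summary
    return out
```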
An efficient, convenient way to implement this is to stream the time-series data to disk, into a separate append-only file per scale. You can start running interval queries immediately, with a reasonable slowdown, by binary-searching the raw scale-segregated files directly (if the records are variable length, a COBS-style encoding lets you resynchronize to the nearest record boundary from a random point in the file). That said, the write amplification for a high-fanout B+ tree index is tiny here, and building it in a streaming fashion is basically free, so generating the index synchronously is my recommendation. You can also generate the summary files in a streaming fashion, but since they involve significant write amplification it's worth decoupling that task to a process with lower IO priority.
To implement the interval query for [a, b] at a given scale k, you use the B+ tree index to find the starting position in that scale's append-only record file via the a - 2^(k+1) lower bound, then just scan linearly through the file until the key exceeds b.
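The seek-then-scan step can be sketched like this, with a Python list standing in for the record file and a sorted list of (key, position) samples standing in for the sparse B+ tree index (again widening the lower bound to a - 2^(k+1) to cover the longest intervals at scale k):

```python
import bisect

def query_scale(records, index, a, b, k):
    """Query window [a, b] against one scale's records, sorted by key u.
    `records` stands in for the append-only file; `index` is a sorted list
    of (u, position) samples standing in for a sparse B+ tree index."""
    lo_key = a - (1 << (k + 1))   # intervals at scale k are shorter than 2^(k+1)
    # Start at the last index entry whose key is <= lo_key (or the beginning).
    i = bisect.bisect_right(index, (lo_key, float('inf'))) - 1
    pos = index[i][1] if i >= 0 else 0
    out = []
    for u, v in records[pos:]:
        if u > b:                 # keys are sorted: past the window, stop
            break
        if v >= a:                # overlap filter
            out.append((u, v))
    return out
```

Because the index is sparse, the scan may start a little before the true lower bound; the overlap filter discards those extra records.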
There are a bunch of variations and optimizations (e.g. coarser scales than powers of two), but that's the gist of it.