Reservoir sampling is a family of randomized algorithms for choosing a simple random sample, without replacement, of k items from a population of unknown size n in a single pass over the items. The size of the population n is not known to the algorithm and is typically too large for all n items to fit into main memory. The population is revealed to the algorithm over time, and the algorithm cannot look back at previous items. At any point, the current state of the algorithm must permit extraction of a simple random sample without replacement of size k over the part of the population seen so far.
While Algorithm R is the simplest and most commonly used, there are other variants that improve performance in specific cases:
| Algorithm | Description | Complexity |
|---|---|---|
| Algorithm R | Basic reservoir sampling; replaces elements with probability k/i | O(N) |
| Algorithm L | Optimized for large N; reduces replacements via skipping | O(N), fewer iterations |
| Weighted Reservoir Sampling | Assigns elements weights, prioritizing selection based on weight | O(N log k) (heap-based) |
| Random Sort Reservoir Sampling | Uses a min-heap priority queue, selecting the k elements with the highest random priority scores | O(N log k) |
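For reference, here is a minimal sketch of basic Algorithm R from the table above (Python and the function name are illustrative assumptions; the repository's own code may be organized differently):

```python
import random

def reservoir_sample_r(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(item)      # the first k items fill the reservoir
        else:
            j = random.randrange(i)     # uniform index in [0, i)
            if j < k:                   # happens with probability k/i
                reservoir[j] = item     # replace a uniformly random slot
    return reservoir

# Example: sample 5 values from a large stream without storing it.
print(reservoir_sample_r(range(1_000_000), 5))
```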
Weighted Reservoir Sampling is an efficient algorithm for selecting k elements proportionally to their weights from a stream of unknown length N, using only O(k) memory.
This repository implements Weighted Algorithm R, an extension of Jeffrey Vitter’s Algorithm R, which allows weighted sampling using a heap-based approach.
The algorithm uses min-heap-based priority selection, which gives O(N log k) time complexity and makes it efficient for large streaming datasets.
We need to select k elements from a data stream of unknown length N, ensuring each element is selected with a probability proportional to its weight w_i.
1. Initialize a min-heap (the reservoir) of size k.
2. Insert the first k elements with their priority scores
   $p_i = \frac{w_i}{U_i}$
   where $U_i$ is a uniform random number from (0,1].
3. For each subsequent element $s_i$ (i > k), compute its priority score $p_i$.
4. If $p_i$ is greater than the smallest priority in the heap, replace the smallest element.
5. After processing all N elements, the reservoir contains k elements selected proportionally to their weights.
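A minimal Python sketch of these steps is shown below; the function name, the use of `heapq`, and the `(item, weight)` input format are illustrative assumptions rather than this repository's actual interface:

```python
import heapq
import random

def weighted_reservoir_sample(pairs, k):
    """Keep the k items with the largest priority p_i = w_i / U_i, using O(k) memory."""
    heap = []                                    # min-heap of (priority, index, item)
    for idx, (item, weight) in enumerate(pairs):
        u = 1.0 - random.random()                # uniform draw from (0, 1]
        priority = weight / u                    # p_i = w_i / U_i
        if len(heap) < k:
            heapq.heappush(heap, (priority, idx, item))
        elif priority > heap[0][0]:              # beats the smallest priority in the heap
            heapq.heapreplace(heap, (priority, idx, item))
    return [item for _, _, item in heap]

# Example: heavier items are more likely to appear in the sample.
print(weighted_reservoir_sample([("a", 1.0), ("b", 2.0), ("c", 5.0), ("d", 0.5)], k=2))
```

The index in the heap tuple is only a tie-breaker, so the items themselves never need to be comparable.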
For any element $s_i$ with weight $w_i$, the priority score is $p_i = \frac{w_i}{U_i}$, where $U_i \sim U(0,1]$.
The probability that $s_i$ is among the top k elements satisfies
$P(s_i \text{ is selected}) \propto w_i$
meaning elements with higher weights are more likely to be selected.
✅ Conclusion: Weighted Algorithm R correctly samples elements proportionally to their weights, unlike uniform Algorithm R.
To validate Weighted Algorithm R, we must check that each element appears in the reservoir with a frequency proportional to its weight. For T independent runs, let count(s_i) be the number of times s_i appears in the reservoir. Over T runs, the expected count is
$\text{Expected count}(s_i) = T \times \frac{w_i}{\sum w_j}$
and we verify that the observed count(s_i) is statistically close to this value.
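A rough sketch of such a check, reusing the `weighted_reservoir_sample` sketch above and comparing each item's share of selections to its share of the total weight (an assumption about how the validation is wired up, not the repository's test code):

```python
from collections import Counter

def check_weight_proportionality(pairs, k, trials=20_000):
    """Run the sampler T times and compare observed selection shares to weight shares."""
    counts = Counter()
    for _ in range(trials):
        counts.update(weighted_reservoir_sample(pairs, k))
    total_selected = trials * k
    total_weight = sum(w for _, w in pairs)
    for item, weight in pairs:
        observed = counts[item] / total_selected
        expected = weight / total_weight
        print(f"{item}: observed share {observed:.3f}, weight share {expected:.3f}")

check_weight_proportionality([("a", 1.0), ("b", 2.0), ("c", 5.0)], k=1)
```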
Reservoir Sampling is a technique for randomly selecting k elements from a stream of unknown length N. Algorithm L, introduced by Kim-Hung Li (1994), improves on earlier reservoir algorithms such as Vitter's Algorithm R by using an optimized skipping approach, significantly reducing the number of random-number calls.
We need to select k elements from a data stream of unknown length N, ensuring each element has an equal probability k/N of being chosen.
Fill the reservoir with the first k elements. Initialize the weight factor W using:
$W = \exp\left(\frac{\log(\text{random}())}{k}\right)$
Skip elements efficiently using the geometric formula:
$\text{skip} = \lfloor \frac{\log(\text{random}())}{\log(1 - W)} \rfloor$
When a skip lands on an element, replace a uniformly random element of the reservoir with it, then update W for the next iteration using:
$W = W \times \exp\left(\frac{\log(\text{random}())}{k}\right)$
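A compact Python sketch of these steps (the function name and the iterator-based stream handling are assumptions for illustration, not necessarily the repository's implementation):

```python
import math
import random
from itertools import islice

def algorithm_l(stream, k):
    """Uniform sample of k items using Algorithm L's geometric skips."""
    it = iter(stream)
    reservoir = list(islice(it, k))                  # fill the reservoir with the first k items
    if len(reservoir) < k:
        return reservoir                             # stream shorter than k
    w = math.exp(math.log(random.random()) / k)      # initial weight factor W
    while True:
        # number of elements to skip before the next replacement
        skip = math.floor(math.log(random.random()) / math.log(1.0 - w))
        nxt = next(islice(it, skip, skip + 1), None) # advance past `skip` elements
        if nxt is None:
            return reservoir                         # stream exhausted
        reservoir[random.randrange(k)] = nxt         # replace a uniformly random slot
        w *= math.exp(math.log(random.random()) / k) # update W for the next iteration

# Example: sample 10 items uniformly from a million-element stream.
print(algorithm_l(range(1_000_000), 10))
```

Note that `random.random()` returns values in [0, 1); the chance of drawing exactly 0 (which would break the logarithm) is negligible, but a production implementation should guard against it.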
For each element $s_i$, we show that it has an equal probability of being selected:
The probability that $s_i$ (for $i > k$) is considered for insertion when it arrives is
$P(s_i \text{ is considered}) = \frac{k}{i}$
Once in the reservoir, $s_i$ survives each later arrival $j$ with probability $1 - \frac{1}{j}$, since arrival $j$ evicts any given reservoir slot with probability $\frac{k}{j} \times \frac{1}{k} = \frac{1}{j}$. The product telescopes, so the probability that $s_i$ remains in the final reservoir is
$P(s_i \text{ in final reservoir}) = \frac{k}{i} \times \prod_{j=i+1}^{N} \left(1 - \frac{1}{j}\right) = \frac{k}{i} \times \frac{i}{N} = \frac{k}{N}, \quad \forall i \in \{1, \dots, N\}$
(the first k elements start in the reservoir and face the same eviction process, so the result holds for them as well).
This confirms that Algorithm L ensures uniform selection.
To validate Algorithm L, we must check that each element ends up in the reservoir with probability k/N. For T independent runs, let count(s_i) be the number of times s_i appears in the reservoir. The expected probability is:
$P(s_i) = \frac{k}{N}$
Expected occurrence over T runs:
$\text{Expected count}(s_i) = T \times \frac{k}{N}$
We then verify that the observed count(s_i) is statistically close to this value.
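A small empirical check along these lines, reusing the `algorithm_l` sketch above (the helper name and the parameters are illustrative):

```python
from collections import Counter

def check_uniformity(n, k, trials=20_000):
    """Run Algorithm L T times over the stream 0..n-1 and compare each element's
    observed count with the expected T * k / n."""
    counts = Counter()
    for _ in range(trials):
        counts.update(algorithm_l(range(n), k))
    expected = trials * k / n                        # Expected count(s_i) = T * k / N
    worst = max(abs(counts[i] - expected) for i in range(n))
    print(f"expected count per element: {expected:.1f}, largest deviation: {worst:.1f}")

check_uniformity(n=100, k=10)
```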