# Sampling
Logly.zig provides a flexible sampling system for controlling log volume in high-throughput scenarios. Logs can be sampled by probability, by rate limit, or by keeping every Nth message.
## Overview
The Sampler module helps you:
- Reduce log volume while maintaining statistical representation
- Implement rate limiting to prevent log flooding
- Sample every Nth message for consistent reduction
- Use adaptive sampling based on system load
## Basic Usage

```zig
const std = @import("std");
const logly = @import("logly");
const Sampler = logly.Sampler;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Create a 50% probability sampler
    var sampler = Sampler.init(allocator, .{ .probability = 0.5 });
    defer sampler.deinit();

    // Check whether each log should be sampled
    for (0..100) |i| {
        if (sampler.shouldSample()) {
            std.debug.print("Log message {d}\n", .{i});
        }
    }
    // Approximately 50% of the messages will be logged
}
```

## Sampler Presets
Logly.zig provides convenient presets for common scenarios:
```zig
const SamplerPresets = logly.SamplerPresets;

// No sampling (100% of messages pass through)
var none = SamplerPresets.none(allocator);
defer none.deinit();

// 10% probability sampling
var sample10 = SamplerPresets.sample10Percent(allocator);
defer sample10.deinit();

// Rate limit: 100 messages per second
var rate100 = SamplerPresets.limit100PerSecond(allocator);
defer rate100.deinit();

// Every 10th message passes through
var every10 = SamplerPresets.every10th(allocator);
defer every10.deinit();

// Adaptive: targets 1000 messages per second
var adaptive = SamplerPresets.adaptive1000PerSecond(allocator);
defer adaptive.deinit();
```

## Sampling Strategies
### Probability Sampling
Sample a percentage of messages randomly:
```zig
// 25% of messages will pass through
var sampler = Sampler.init(allocator, .{ .probability = 0.25 });
defer sampler.deinit();

for (0..1000) |_| {
    if (sampler.shouldSample()) {
        // Approximately 250 iterations will reach here
    }
}
```

### Rate Limiting
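Conceptually, a rate limiter admits at most `max_records` records per window and drops the rest. A minimal fixed-window sketch (Python for illustration; the class and field names are hypothetical, not logly's implementation):

```python
import time

class FixedWindowLimiter:
    """Allow at most max_records per window_ms; drop the rest."""
    def __init__(self, max_records: int, window_ms: int):
        self.max_records = max_records
        self.window_ms = window_ms
        self.window_start = time.monotonic()
        self.count = 0

    def should_sample(self) -> bool:
        now = time.monotonic()
        # Start a fresh window once the current one has elapsed
        if (now - self.window_start) * 1000.0 >= self.window_ms:
            self.window_start = now
            self.count = 0
        if self.count < self.max_records:
            self.count += 1
            return True
        return False

limiter = FixedWindowLimiter(max_records=100, window_ms=1000)
passed = sum(limiter.should_sample() for _ in range(250))
print(passed)  # 100: only the first 100 calls in the window pass
```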
Limit to a maximum number of messages per time window:
```zig
// Allow 100 messages per 1000 ms window
var sampler = Sampler.init(allocator, .{ .rate_limit = .{
    .max_records = 100,
    .window_ms = 1000,
} });
defer sampler.deinit();
// The first 100 calls in each window pass; the rest are dropped
```

### Every-Nth Sampling
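The every-Nth strategy is a simple counter. A sketch of the logic (Python; illustrative names only, not the logly implementation):

```python
class EveryNth:
    """Pass every n-th call; drop the others."""
    def __init__(self, n: int):
        self.n = n
        self.count = 0

    def should_sample(self) -> bool:
        self.count += 1
        return self.count % self.n == 0

sampler = EveryNth(10)
kept = [i for i in range(1, 101) if sampler.should_sample()]
print(kept)  # messages 10, 20, 30, ..., 100
```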
Keep every Nth message:
```zig
// Keep every 10th message
var sampler = Sampler.init(allocator, .{ .every_n = 10 });
defer sampler.deinit();
// Messages 10, 20, 30, 40, etc. will pass through
```

### Adaptive Sampling
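Adaptive sampling periodically compares observed throughput with the target and rescales the sampling probability within its configured bounds. One common proportional scheme, shown as a sketch (this is not necessarily logly's exact adjustment formula):

```python
def adjust_rate(current_rate: float, observed_per_sec: float,
                target_per_sec: float,
                min_rate: float = 0.01, max_rate: float = 1.0) -> float:
    """Scale the sampling probability so accepted throughput approaches the target."""
    if observed_per_sec <= 0:
        return max_rate  # no traffic: sample everything
    proposed = current_rate * (target_per_sec / observed_per_sec)
    # Clamp to the configured bounds
    return max(min_rate, min(max_rate, proposed))

# Traffic spikes to 10x the target: the rate drops toward 10%
print(adjust_rate(1.0, observed_per_sec=10_000, target_per_sec=1_000))
```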
Automatically adjust sampling rate based on throughput:
```zig
var sampler = Sampler.init(allocator, .{ .adaptive = .{
    .target_rate = 1000, // Target 1000 msgs/sec
    .min_sample_rate = 0.01, // Never below 1%
    .max_sample_rate = 1.0, // Up to 100%
    .adjustment_interval_ms = 1000, // Adjust every second
} });
defer sampler.deinit();
```

## Monitoring and Callbacks
You can register callbacks to monitor sampling decisions and rate adjustments:
```zig
fn onReject(rate: f64, reason: Sampler.SampleRejectReason) void {
    _ = rate;
    _ = reason;
    // Log the rejection or update metrics
}

fn onRateExceeded(count: u32, max: u32) void {
    _ = count;
    _ = max;
    // Handle the rate limit being exceeded
}

sampler.setRejectCallback(onReject);
sampler.setRateLimitCallback(onRateExceeded);
```

## Sampler Statistics
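The statistics boil down to accepted/rejected counters plus a derived accept rate; logly maintains them atomically for thread safety, but the arithmetic is as in this single-threaded sketch (Python, illustrative names):

```python
from dataclasses import dataclass

@dataclass
class SamplerStats:
    accepted: int = 0
    rejected: int = 0

    @property
    def total(self) -> int:
        return self.accepted + self.rejected

    def accept_rate(self) -> float:
        # Fraction of records that passed sampling (0.0 when nothing sampled yet)
        return self.accepted / self.total if self.total else 0.0

stats = SamplerStats()
for decision in [True, True, False, True]:
    if decision:
        stats.accepted += 1
    else:
        stats.rejected += 1
print(f"{stats.accept_rate() * 100:.2f}%")  # 75.00%
```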
Track sampling performance using thread-safe statistics:
```zig
var sampler = Sampler.init(allocator, .{ .probability = 0.5 });
defer sampler.deinit();

// Sample some logs
for (0..100) |_| {
    _ = sampler.shouldSample();
}

// Get statistics
const stats = sampler.getStats();
std.debug.print("Total records: {d}\n", .{stats.total_records_sampled.load(.monotonic)});
std.debug.print("Accepted: {d}\n", .{stats.records_accepted.load(.monotonic)});
std.debug.print("Rejected: {d}\n", .{stats.records_rejected.load(.monotonic)});
std.debug.print("Accept rate: {d:.2}%\n", .{stats.getAcceptRate() * 100});

// Get the current sampling rate
const rate = sampler.getCurrentRate();
std.debug.print("Current sampling probability: {d:.2}%\n", .{rate * 100});

// Reset statistics
sampler.reset();
```

## Production Example
```zig
const std = @import("std");
const logly = @import("logly");
const SamplerPresets = logly.SamplerPresets;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var logger = try logly.Logger.init(allocator);
    defer logger.deinit();

    // Adaptive sampling for production - targets 1000 msgs/sec
    var sampler = SamplerPresets.adaptive1000PerSecond(allocator);
    defer sampler.deinit();

    // High-volume logging with sampling
    var i: usize = 0;
    while (i < 10000) : (i += 1) {
        // Sample debug/info logs
        if (sampler.shouldSample()) {
            try logger.infof("Processing item {d}", .{i});
        }
        // Never sample errors - always log them
        // try logger.err("Error if needed");
    }
}
```

## Best Practices
- Never sample errors/critical: always log 100% of error-level messages
- Start conservative: begin with higher sampling rates and reduce as needed
- Use adaptive sampling for variable loads: it handles traffic spikes automatically
- Monitor statistics: use `getStats()` to track sampling effectiveness
- Test sampling rates: verify you can still debug issues with sampled logs
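The first practice, never sampling errors, is typically enforced with a level check before the sampler is consulted. A sketch (Python; function and names are illustrative, not part of logly):

```python
ERROR_LEVELS = {"error", "critical"}

def should_log(level: str, sampler_decision: bool) -> bool:
    """Bypass sampling entirely for error-level records."""
    if level in ERROR_LEVELS:
        return True  # never drop errors
    return sampler_decision

print(should_log("error", False))  # True  - errors always pass
print(should_log("info", False))   # False - info respects the sampler
```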
## See Also
- Filtering - Rule-based log filtering
- Metrics - Logging metrics collection
- Configuration - Global configuration options
## New Presets (v0.0.9)

```zig
const SamplerPresets = logly.SamplerPresets;

// Additional probability presets
var sample_50 = SamplerPresets.sample50Percent(allocator);
var sample_1 = SamplerPresets.sample1Percent(allocator);

// Additional rate-limit presets
var limit_10 = SamplerPresets.limit10PerSecond(allocator);
var limit_1000 = SamplerPresets.limit1000PerSecond(allocator);

// Additional every-n presets
var every_5 = SamplerPresets.every5th(allocator);
var every_100 = SamplerPresets.every100th(allocator);

// Adaptive sampling presets
var adaptive_100 = SamplerPresets.adaptive100PerSecond(allocator);
```

## Aliases
| Alias | Method |
|---|---|
| `sample` | `shouldSample` |
| `check` | `shouldSample` |
| `allow` | `shouldSample` |
| `statistics` | `getStats` |
| `stats_` | `getStats` |
| `rate` | `getCurrentRate` |
