# Scheduler API
The scheduler module provides automatic log maintenance with scheduled cleanup, compression, rotation, and custom tasks. It runs in the background, either on a dedicated worker thread or by submitting tasks to a shared `ThreadPool`.
## Quick Reference: Method Aliases
| Full Method | Alias(es) | Description |
|---|---|---|
| `init()` | `create()` | Initialize scheduler |
| `initFromConfig()` | `fromConfig()` | Initialize from config |
| `deinit()` | `destroy()` | Deinitialize scheduler |
| `setHealthCallback()` | `onHealth()` | Set health callback |
| `setMetricsCallback()` | `onMetrics()` | Set metrics callback |
| `getHealthStatus()` | `healthStatus()` | Get health status |
| `getMetrics()` | `snapshot()` | Get metrics snapshot |
| `addTask()` | `add()` | Add a task |
| `setTaskPriority()` | `setPriority()` | Set task priority |
| `setTaskRetryPolicy()` | `retry()` | Set task retry policy |
| `setTaskDependency()` | `dependsOn()` | Set task dependency |
| `addCleanupTask()` | `cleanup()` | Add cleanup task |
| `addCompressionTask()` | `compress()` | Add compression task |
| `addCustomTask()` | `custom()` | Add custom task |
| `setTaskEnabled()` | `enable()` | Enable/disable task |
| `removeTask()` | `remove()` | Remove task |
| `taskIndexByName()` | `indexOfTask()` | Find task index by name |
| `getTaskSnapshot()` | `snapshotTask()` | Get immutable task snapshot by index |
| `getTaskSnapshotByName()` | `snapshotTaskByName()` | Get immutable task snapshot by name |
| `hasTaskNamed()` | `hasTask()` | Check task existence by name |
| `setTaskEnabledByName()` | `enableByName()` | Enable/disable task by name |
| `removeTaskByName()` | `removeNamed()` | Remove task by name |
| `enabledTaskCount()` | `enabledCount()` | Count enabled tasks |
| `runningTaskCount()` | `runningCount()` | Count running tasks |
| `readyTaskCount()` | `readyCount()` | Count tasks ready to run |
| `nextRunInMs()` | `nextRunMs()` | Get ms until task's next run |
| `nextRunInMsByName()` | `nextRunForTask()` | Get ms until task's next run by name |
| `setTaskSchedule()` | `updateSchedule()` | Update task schedule |
| `rescheduleNow()` | `runSoon()` | Force task runnable immediately |
| `setTaskStartedCallback()` | `onStarted()` | Set task started callback |
| `setTaskCompletedCallback()` | `onCompleted()` | Set task completed callback |
| `setTaskErrorCallback()` | `onError()` | Set task error callback |
| `setScheduleTickCallback()` | `onTick()` | Set tick callback |
| `setHealthCheckCallback()` | `onHealthCheck()` | Set health check callback |
| `runNow()` | `run()` | Run task immediately |
| `runNowByName()` | `runNamed()` | Run task immediately by name |
| `runPending()` | `pending()` | Run pending tasks |
| `start()` | `begin()` | Start scheduler |
| `stop()` | `end()`, `halt()` | Stop scheduler |
| `getStats()` | `statistics()` | Get scheduler statistics |
| `resetStats()` | `clearStats()` | Reset statistics |
| `setTelemetry()` | `trace()` | Set telemetry |
| `clearTelemetry()` | `noTelemetry()` | Clear telemetry |
| `getTasks()` | `tasks_()` | Get tasks |
| `taskCount()` | `count()` | Get task count |
| `isRunning()` | `running_()` | Check if running |
| `hasTasks()` | `has_()` | Check if has tasks |
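Aliases are interchangeable with the full method names. A minimal sketch using only aliases from the table above (the flush task and interval are illustrative):

```zig
const std = @import("std");
const logly = @import("logly");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // `create`/`destroy` are aliases for `init`/`deinit`.
    var sched = try logly.Scheduler.create(gpa.allocator());
    defer sched.destroy();

    // `add` is an alias for `addTask`.
    _ = try sched.add("flush", .flush, .{ .interval = 60 * 1000 }, .{});

    // `begin`/`halt` alias `start`/`stop`.
    try sched.begin();
    sched.halt();
}
```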
## Overview

```zig
const logly = @import("logly");
const Scheduler = logly.Scheduler;
const SchedulerPresets = logly.SchedulerPresets;
```

## Centralized Configuration

The scheduler can be enabled through the central `Config` struct:

```zig
var config = logly.Config.default();
config.scheduler = .{
    .enabled = true,
    .cleanup_max_age_days = 14,
    .max_files = 100,
    .compress_before_cleanup = true,
    .file_pattern = "*.log",
};
const logger = try logly.Logger.initWithConfig(allocator, config);
```

Or use the fluent API:

```zig
const config = logly.Config.default().withScheduler(.{});
```

## Types
Scheduler ​
The main scheduler struct for managing scheduled tasks with optional telemetry integration.
pub const Scheduler = struct {
allocator: std.mem.Allocator,
tasks: std.ArrayList(ScheduledTask),
stats: SchedulerStats,
compression: Compression, // Integrated compression support
running: std.atomic.Value(bool),
worker_thread: ?std.Thread,
telemetry: ?*Telemetry, // Optional telemetry for distributed tracing
};SchedulerConfig (Centralized) ​
Configuration available through `Config.SchedulerConfig`:

```zig
pub const SchedulerConfig = struct {
    /// Enable the scheduler.
    enabled: bool = false,
    /// Default cleanup max age in days.
    cleanup_max_age_days: u64 = 7,
    /// Default max files to keep.
    max_files: ?usize = null,
    /// Enable compression before cleanup.
    compress_before_cleanup: bool = false,
    /// Default file pattern for cleanup.
    file_pattern: []const u8 = "*.log",
    /// Root directory for compressed/archived files.
    archive_root_dir: ?[]const u8 = null,
    /// Create date-based subdirectories (YYYY/MM/DD).
    create_date_subdirs: bool = false,
    /// Compression algorithm for scheduled compression tasks.
    compression_algorithm: CompressionConfig.CompressionAlgorithm = .gzip,
    /// Compression level for scheduled tasks.
    compression_level: CompressionConfig.CompressionLevel = .default,
    /// Keep original files after scheduled compression.
    keep_originals: bool = false,
    /// Custom prefix for archived file names.
    archive_file_prefix: ?[]const u8 = null,
    /// Custom suffix for archived file names.
    archive_file_suffix: ?[]const u8 = null,
    /// Preserve directory structure in archive root.
    preserve_dir_structure: bool = true,
    /// Delete empty directories after cleanup.
    clean_empty_dirs: bool = false,
    /// Minimum file age in days before compression.
    min_age_days_for_compression: u64 = 1,
    /// Maximum concurrent compression tasks.
    max_concurrent_compressions: usize = 2,
};
```

### SchedulerConfig Field Reference
| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | `bool` | `false` | Enable the scheduler |
| `cleanup_max_age_days` | `u64` | `7` | Max age for cleanup tasks |
| `max_files` | `?usize` | `null` | Max files to retain |
| `compress_before_cleanup` | `bool` | `false` | Compress before deleting |
| `file_pattern` | `[]const u8` | `"*.log"` | File pattern for tasks |
| `archive_root_dir` | `?[]const u8` | `null` | Centralized archive location |
| `create_date_subdirs` | `bool` | `false` | Create YYYY/MM/DD subdirs |
| `compression_algorithm` | `CompressionAlgorithm` | `.gzip` | Algorithm for compression (gzip, zlib, deflate, zstd, lzma, lzma2, xz, zip, tar.gz, lz4) |
| `compression_level` | `CompressionLevel` | `.default` | Compression level |
| `keep_originals` | `bool` | `false` | Keep originals after compression |
| `archive_file_prefix` | `?[]const u8` | `null` | Prefix for archived files |
| `archive_file_suffix` | `?[]const u8` | `null` | Suffix for archived files |
| `preserve_dir_structure` | `bool` | `true` | Keep directory structure |
| `clean_empty_dirs` | `bool` | `false` | Remove empty directories |
| `min_age_days_for_compression` | `u64` | `1` | Min age before compression |
| `max_concurrent_compressions` | `usize` | `2` | Max parallel compressions |
Note: v0.1.6 expanded compression and archiving support (including LZMA, LZMA2, XZ, TAR.GZ, ZIP, and LZ4) and added helper utilities (e.g., `Utils.getCompressionExtension()`) and factory presets to simplify usage. Use the `compression_algorithm` and `compression_level` fields (or the `Compression` factory methods) to select the appropriate algorithm and extension for your scheduled compression tasks.
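A centralized-archive setup built from the fields above might look like the following sketch (the paths and values are illustrative, not defaults):

```zig
const logly = @import("logly");

var config = logly.Config.default();
config.scheduler = .{
    .enabled = true,
    .compress_before_cleanup = true,
    .archive_root_dir = "archive",     // compressed files are collected here
    .create_date_subdirs = true,       // archive/YYYY/MM/DD/...
    .compression_algorithm = .zstd,    // any supported algorithm
    .min_age_days_for_compression = 2, // leave fresh logs untouched
    .max_concurrent_compressions = 4,
};
```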
### ScheduledTask

A scheduled task configuration.

```zig
pub const ScheduledTask = struct {
    /// Unique task name
    name: []const u8,
    /// Task type
    task_type: TaskType,
    /// Schedule configuration
    schedule: Schedule,
    /// Task-specific configuration
    config: TaskConfig,
    /// Task execution callback (for custom tasks)
    callback: ?*const fn (*ScheduledTask) anyerror!void = null,
    /// Whether the task is enabled
    enabled: bool = true,
    /// Whether the task is currently running
    running: bool = false,
    /// Task execution priority
    priority: Priority = .normal,
    /// Retry policy for failed tasks
    retry_policy: RetryPolicy = .{},
    /// Name of another task that must complete successfully before this one runs
    depends_on: ?[]const u8 = null,
    /// Last execution timestamp
    last_run: i64 = 0,
    /// Next scheduled execution
    next_run: i64 = 0,
    /// Number of executions
    run_count: u64 = 0,
    /// Number of failures
    error_count: u64 = 0,
    /// Retries remaining for the current failure
    retries_remaining: u32 = 0,

    pub const Priority = enum {
        low,
        normal,
        high,
        critical,
    };

    pub const RetryPolicy = struct {
        max_retries: u32 = 3,
        interval_ms: u32 = 5000,
        backoff_multiplier: f32 = 1.5,
    };
};
```

### TaskType
The types of scheduled tasks.

```zig
pub const TaskType = enum {
    /// Clean up old log files
    cleanup,
    /// Rotate log files
    rotation,
    /// Compress log files
    compression,
    /// Flush all buffers
    flush,
    /// Custom user-defined task
    custom,
    /// Health check
    health_check,
    /// Metrics collection
    metrics_snapshot,
};
```

### TaskConfig
Task-specific configuration.

```zig
pub const TaskConfig = struct {
    /// Path for file-based tasks
    path: ?[]const u8 = null,
    /// Maximum age in seconds for cleanup
    max_age_seconds: u64 = 7 * 24 * 60 * 60,
    /// Maximum files to keep
    max_files: ?usize = null,
    /// Maximum total size in bytes
    max_total_size: ?u64 = null,
    /// Minimum age in seconds (useful for compression)
    min_age_seconds: u64 = 0,
    /// File pattern to match (e.g., "*.log")
    file_pattern: ?[]const u8 = null,
    /// Compress files before cleanup (compress then delete)
    compress_before_delete: bool = false,
    /// Compress files and keep both original and compressed (archive mode)
    compress_and_keep: bool = false,
    /// Only compress files, don't delete any (pure archival)
    compress_only: bool = false,
    /// Skip files that are already compressed (.gz, .lgz, .zst)
    skip_already_compressed: bool = true,
    /// Recursive directory processing
    recursive: bool = false,
    /// Trigger task only if disk usage exceeds this percentage (0-100)
    trigger_disk_usage_percent: ?u8 = null,
    /// Required free space in bytes before running task
    min_free_space_bytes: ?u64 = null,
};
```

### TaskConfig Compression Modes
| Field | Behavior |
|---|---|
| `compress_before_delete` | Compress file, then delete original |
| `compress_and_keep` | Compress file, keep both versions |
| `compress_only` | Compress file, never delete anything |
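A sketch contrasting the three modes (assumes `TaskConfig` has been brought into scope, e.g. via `logly.Scheduler.ScheduledTask.TaskConfig`):

```zig
const week: u64 = 7 * 24 * 60 * 60;

// Reclaim space: compress, then delete the original.
const reclaim: TaskConfig = .{
    .path = "logs",
    .compress_before_delete = true,
    .min_age_seconds = week,
};

// Archive mode: compress, keep both files.
const archive: TaskConfig = .{
    .path = "logs",
    .compress_and_keep = true,
    .min_age_seconds = week,
};

// Pure archival: compress only, never delete anything.
const archival: TaskConfig = .{
    .path = "logs",
    .compress_only = true,
    .min_age_seconds = week,
};
```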
### Schedule

Schedule configuration.

```zig
pub const Schedule = union(enum) {
    /// Run once after delay (in milliseconds)
    once: u64,
    /// Run at fixed intervals (in milliseconds)
    interval: u64,
    /// Run at a specific time of day
    daily: DailySchedule,
    /// Cron-like schedule
    cron: CronSchedule,

    pub const DailySchedule = struct {
        hour: u8 = 0,
        minute: u8 = 0,
    };

    pub const CronSchedule = struct {
        minute: ?u8 = null,
        hour: ?u8 = null,
        day_of_month: ?u8 = null,
        month: ?u8 = null,
        day_of_week: ?u8 = null,
    };
};
```

### SchedulerStats
Thread-safe statistics for scheduled operations using atomic counters. Works correctly on both 32-bit and 64-bit architectures.

```zig
pub const SchedulerStats = struct {
    /// Total tasks executed successfully (atomic).
    tasks_executed: std.atomic.Value(Constants.AtomicUnsigned),
    /// Total tasks that failed (atomic).
    tasks_failed: std.atomic.Value(Constants.AtomicUnsigned),
    /// Total files cleaned up (atomic).
    files_cleaned: std.atomic.Value(Constants.AtomicUnsigned),
    /// Total files compressed (atomic).
    files_compressed: std.atomic.Value(Constants.AtomicUnsigned),
    /// Total bytes freed by cleanup operations (atomic).
    bytes_freed: std.atomic.Value(Constants.AtomicUnsigned),
    /// Total bytes saved by compression (atomic).
    bytes_saved: std.atomic.Value(Constants.AtomicUnsigned),
    /// Last run time in milliseconds (atomic).
    last_run_time: std.atomic.Value(i64),
    /// Scheduler start time for uptime calculation (atomic).
    start_time: std.atomic.Value(i64),
};
```

### SchedulerStats Helper Methods
| Method | Return Type | Description |
|---|---|---|
| `successRate()` | `f64` | Task success rate (0.0 - 1.0) |
| `failureRate()` | `f64` | Task failure rate (0.0 - 1.0) |
| `hasFailures()` | `bool` | Returns true if any tasks have failed |
| `getExecuted()` | `u64` | Total tasks executed as u64 |
| `getFailed()` | `u64` | Total tasks failed as u64 |
| `getFilesCleaned()` | `u64` | Total files cleaned as u64 |
| `getFilesCompressed()` | `u64` | Total files compressed as u64 |
| `getBytesFreed()` | `u64` | Total bytes freed as u64 |
| `getBytesSaved()` | `u64` | Total bytes saved by compression as u64 |
| `uptimeSeconds()` | `i64` | Uptime in seconds since scheduler started |
| `tasksPerHour()` | `f64` | Average tasks executed per hour |
| `compressionRatio()` | `f64` | Compression savings ratio (bytes saved / total) |
#### Usage Example

```zig
const stats = scheduler.getStats();

// Check success/failure rates
const success = stats.successRate(); // e.g., 0.95 (95%)
const failure = stats.failureRate(); // e.g., 0.05 (5%)

// Check for failures
if (stats.hasFailures()) {
    std.log.warn("Scheduler has {d} failed tasks", .{stats.getFailed()});
}

// Get uptime info
const uptime = stats.uptimeSeconds();
const rate = stats.tasksPerHour();
std.log.info("Running for {d}s, {d:.2} tasks/hour", .{ uptime, rate });

// Check cleanup efficiency
const freed = stats.getBytesFreed();
const saved = stats.getBytesSaved();
const ratio = stats.compressionRatio();
std.log.info("Freed {d} bytes, saved {d} bytes ({d:.1}% compression)", .{
    freed, saved, ratio * 100.0,
});
```

## Methods
### init

Create a new scheduler.

Alias: `create`

```zig
pub fn init(allocator: std.mem.Allocator) !*Scheduler
```

### initWithThreadPool

Create a new scheduler that uses a thread pool for task execution.

```zig
pub fn initWithThreadPool(allocator: std.mem.Allocator, thread_pool: *ThreadPool) !*Scheduler
```

Parameters:

- `allocator`: Memory allocator
- `thread_pool`: Shared thread pool instance

### initFromConfig

Create a scheduler from global configuration.

```zig
pub fn initFromConfig(allocator: std.mem.Allocator, config: SchedulerConfig, logs_path: ?[]const u8) !*Scheduler
```

### deinit

Clean up resources and stop the scheduler.

```zig
pub fn deinit(self: *Scheduler) void
```

### start

Start the scheduler worker thread.

```zig
pub fn start(self: *Scheduler) !void
```

### stop

Stop the scheduler gracefully, waiting for pending tasks to complete (with a timeout).

```zig
pub fn stop(self: *Scheduler) void
```

### addTask

Add a scheduled task.

```zig
pub fn addTask(self: *Scheduler, name: []const u8, task_type: TaskType, schedule: Schedule, config: ScheduledTask.TaskConfig) !usize
```

Parameters:

- `name`: Unique task identifier
- `task_type`: Type of task
- `schedule`: Execution schedule
- `config`: Task configuration

Returns: Index of the added task
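Besides the interval and daily schedules shown elsewhere on this page, `addTask` also accepts the cron-like variant. A hedged sketch (assumes an initialized `scheduler`, that unset cron fields act as wildcards, and that `day_of_week` uses 0 for Sunday; the `CronSchedule` definition above does not document the numbering):

```zig
// Weekly cleanup every Sunday at 03:00 via the cron-like schedule.
_ = try scheduler.addTask(
    "weekly_cleanup",
    .cleanup,
    .{ .cron = .{ .minute = 0, .hour = 3, .day_of_week = 0 } },
    .{
        .path = "logs",
        .max_age_seconds = 30 * 24 * 60 * 60, // delete logs older than 30 days
        .file_pattern = "*.log",
    },
);
```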
### setTaskPriority

Set the execution priority for a task.

```zig
pub fn setTaskPriority(self: *Scheduler, index: usize, priority: ScheduledTask.Priority) void
```

### setTaskRetryPolicy

Configure retry behavior for a task.

```zig
pub fn setTaskRetryPolicy(self: *Scheduler, index: usize, policy: ScheduledTask.RetryPolicy) void
```

### setTaskDependency

Set a dependency for a task (it will only run after the named task completes successfully).

```zig
pub fn setTaskDependency(self: *Scheduler, index: usize, dependency_name: []const u8) !void
```

### taskIndexByName

Find a task index by its name.

```zig
pub fn taskIndexByName(self: *Scheduler, name: []const u8) ?usize
```

### getTaskSnapshot

Returns an immutable task state snapshot by task index.

```zig
pub fn getTaskSnapshot(self: *Scheduler, index: usize) ?TaskSnapshot
```

### getTaskSnapshotByName

Returns an immutable task state snapshot by task name.

```zig
pub fn getTaskSnapshotByName(self: *Scheduler, name: []const u8) ?TaskSnapshot
```

### hasTaskNamed

Check whether a task with this name exists.

```zig
pub fn hasTaskNamed(self: *Scheduler, name: []const u8) bool
```

### setTaskEnabledByName

Enable or disable a task by name.

```zig
pub fn setTaskEnabledByName(self: *Scheduler, name: []const u8, enabled: bool) bool
```

Returns true when the task exists and was updated.

### removeTaskByName

Remove a task by name.

```zig
pub fn removeTaskByName(self: *Scheduler, name: []const u8) bool
```

Returns true when the task existed and was removed.
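The priority, retry, and dependency setters compose naturally with `addTask`. A sketch (assumes an initialized `scheduler`; the task names and values are illustrative):

```zig
const compress_idx = try scheduler.addTask(
    "compress", .compression, .{ .interval = 3600 * 1000 }, .{ .path = "logs" });
const cleanup_idx = try scheduler.addTask(
    "cleanup", .cleanup, .{ .interval = 3600 * 1000 }, .{ .path = "logs" });

// Compression runs at high priority and retries up to 5 times
// with exponential backoff between attempts.
scheduler.setTaskPriority(compress_idx, .high);
scheduler.setTaskRetryPolicy(compress_idx, .{
    .max_retries = 5,
    .interval_ms = 10_000,
    .backoff_multiplier = 2.0,
});

// Cleanup only runs after "compress" completes successfully.
try scheduler.setTaskDependency(cleanup_idx, "compress");
```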
### enabledTaskCount

Get the number of enabled tasks.

```zig
pub fn enabledTaskCount(self: *Scheduler) usize
```

### runningTaskCount

Get the number of tasks currently running.

```zig
pub fn runningTaskCount(self: *Scheduler) usize
```

### readyTaskCount

Get the number of tasks currently ready to run (time, dependency, and resource checks all passed).

```zig
pub fn readyTaskCount(self: *Scheduler) usize
```

### nextRunInMs

Get the remaining milliseconds until the next run for a task index.

```zig
pub fn nextRunInMs(self: *Scheduler, index: usize) ?i64
```

### nextRunInMsByName

Get the remaining milliseconds until the next run by task name.

```zig
pub fn nextRunInMsByName(self: *Scheduler, name: []const u8) ?i64
```

### setTaskSchedule

Update the schedule for a task and recalculate its next-run timestamp.

```zig
pub fn setTaskSchedule(self: *Scheduler, index: usize, schedule: Schedule) bool
```

### rescheduleNow

Force a task to become runnable on the next scheduler pass.

```zig
pub fn rescheduleNow(self: *Scheduler, index: usize) bool
```

### getDiskUsage

Get the current disk usage percentage for a path.

```zig
pub fn getDiskUsage(self: *Scheduler, path: []const u8) !u8
```

### getFreeSpace

Get the free space in bytes for a path.

```zig
pub fn getFreeSpace(self: *Scheduler, path: []const u8) !u64
```

### removeTask
Remove a task by index.

```zig
pub fn removeTask(self: *Scheduler, index: usize) !void
```

### enableTask

Enable a task by index.

```zig
pub fn enableTask(self: *Scheduler, index: usize) void
```

### disableTask

Disable a task by index.

```zig
pub fn disableTask(self: *Scheduler, index: usize) void
```

### runTaskNow

Execute a task immediately.

```zig
pub fn runTaskNow(self: *Scheduler, index: usize) !void
```

### runNowByName

Run a task immediately by task name.

```zig
pub fn runNowByName(self: *Scheduler, name: []const u8) !bool
```

Returns true when the task exists and was executed.
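The by-name execution helpers pair well with the next-run and snapshot queries. A sketch (assumes a task named "log_cleanup" was added earlier):

```zig
// How long until the next scheduled run?
if (scheduler.nextRunInMsByName("log_cleanup")) |ms| {
    std.debug.print("log_cleanup runs in {d} ms\n", .{ms});
}

// Force it to run immediately; returns false if the name is unknown.
if (try scheduler.runNowByName("log_cleanup")) {
    // Inspect run_count / error_count via an immutable snapshot.
    if (scheduler.getTaskSnapshotByName("log_cleanup")) |snap| {
        std.debug.print("runs: {d}, errors: {d}\n", .{ snap.run_count, snap.error_count });
    }
}
```

The snapshot field names (`run_count`, `error_count`) are assumed to mirror `ScheduledTask`; the docs above do not list `TaskSnapshot`'s fields.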
### getStats

Get current scheduler statistics.

```zig
pub fn getStats(self: *const Scheduler) SchedulerStats
```

### resetStats

Reset all scheduler statistics to zero.

```zig
pub fn resetStats(self: *Scheduler) void
```

### setTelemetry

Set the telemetry instance for distributed tracing. When enabled, task executions create spans for observability.

```zig
pub fn setTelemetry(self: *Scheduler, telemetry: *Telemetry) void
```

Parameters:

- `telemetry`: Telemetry instance for tracing

### clearTelemetry

Disable telemetry tracing.

```zig
pub fn clearTelemetry(self: *Scheduler) void
```

### listTasks

Get a list of all scheduled tasks.

```zig
pub fn listTasks(self: *const Scheduler) []const ScheduledTask
```

## Usage Example
```zig
const std = @import("std");
const logly = @import("logly");
const Scheduler = logly.Scheduler;
const SchedulerPresets = logly.SchedulerPresets;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Create scheduler
    var scheduler = try Scheduler.init(allocator);
    defer scheduler.deinit();

    // Add daily cleanup task using presets
    _ = try scheduler.addTask(
        "log_cleanup",
        .cleanup,
        SchedulerPresets.dailyAt(2, 30), // Daily at 2:30 AM
        SchedulerPresets.dailyCleanup("logs", 30), // Config helper
    );

    // Add hourly compression manually
    _ = try scheduler.addTask(
        "log_compression",
        .compression,
        .{ .interval = 3600 * 1000 }, // Every hour (ms)
        .{
            .path = "logs",
            .min_age_seconds = 3600, // Compress files older than 1 hour
            .file_pattern = "*.log",
        },
    );

    // Start scheduler
    try scheduler.start();
    defer scheduler.stop();

    // Check stats periodically
    const stats = scheduler.getStats();
    std.debug.print("Tasks executed: {d}\n", .{stats.getExecuted()});
}
```

## Aliases
The Scheduler module provides convenience aliases:

| Alias | Method |
|---|---|
| `begin` | `start` |
| `end` | `stop` |
| `halt` | `stop` |
| `statistics` | `getStats` |

### Additional State Methods

- `taskCount() usize` - Returns the number of scheduled tasks
- `isRunning() bool` - Returns true if the scheduler is running
- `hasTasks() bool` - Returns true if any tasks are scheduled
## SchedulerPresets

Helper functions for creating common schedules and task configurations.

### Schedule Presets

| Method | Description |
|---|---|
| `hourlyCompression()` | Compression every hour |
| `everyMinutes(n)` | Every N minutes |
| `every15Minutes()` | Every 15 minutes (v0.1.5+) |
| `every30Minutes()` | Every 30 minutes |
| `every6Hours()` | Every 6 hours |
| `every12Hours()` | Every 12 hours |
| `dailyAt(hour, minute)` | Daily at a specific time |
| `dailyMidnight()` | Daily at midnight |
| `dailyMaintenance()` | Daily at 2 AM |
| `weeklyCleanup()` | Weekly on Sunday at 2 AM |
| `onceAfter(seconds)` | Once after delay (v0.1.5+) |
| `healthCheckSchedule()` | Every 5 minutes (v0.1.5+) |
| `metricsSchedule()` | Every minute (v0.1.5+) |
### Task Config Presets

| Method | Description |
|---|---|
| `dailyCleanup(path, days)` | Delete logs older than N days |
| `compressThenDelete(path, days)` | Compress then delete originals |
| `compressAndKeep(path, days)` | Compress, keep both versions |
| `compressOnly(path, days)` | Compress only, never delete |
| `archiveOldLogs(path, compress_days, delete_days)` | Archive with age limits |
| `aggressiveCleanup(path, days, max_files)` | Compress + file count limit |
| `hourlyArchive(path)` | Compress files older than 1 day (v0.1.5+) |
| `compressOnRotation(path)` | Compress just-rotated files (v0.1.5+) |
| `sizeBasedCompression(path, bytes)` | Compress when size exceeds threshold (v0.1.5+) |
| `diskUsageTriggered(path, percent)` | Compress when disk usage high (v0.1.5+) |
| `lowDiskSpaceTriggered(path, min_free)` | Compress when disk space low (v0.1.5+) |
| `recursiveCompression(path, days)` | Recursive directory compression (v0.1.5+) |
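Schedule presets and task-config presets are designed to be paired in a single `addTask` call. A sketch (assumes an initialized `scheduler` and a `SchedulerPresets` import; the name and values are illustrative):

```zig
// Every 6 hours, compress logs older than 3 days, keeping the originals.
_ = try scheduler.addTask(
    "archive_6h",
    .compression,
    SchedulerPresets.every6Hours(),
    SchedulerPresets.compressAndKeep("logs", 3),
);
```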
```zig
pub const SchedulerPresets = struct {
    // Schedules
    pub fn hourlyCompression() Schedule;
    pub fn everyMinutes(n: u64) Schedule;
    pub fn every15Minutes() Schedule; // v0.1.5+
    pub fn every30Minutes() Schedule;
    pub fn every6Hours() Schedule;
    pub fn every12Hours() Schedule;
    pub fn dailyAt(hour: u8, minute: u8) Schedule;
    pub fn dailyMidnight() Schedule;
    pub fn dailyMaintenance() Schedule;
    pub fn weeklyCleanup() Schedule;
    pub fn onceAfter(seconds: u64) Schedule; // v0.1.5+
    pub fn healthCheckSchedule() Schedule; // v0.1.5+
    pub fn metricsSchedule() Schedule; // v0.1.5+

    // Task Configurations
    pub fn dailyCleanup(path: []const u8, max_age_days: u64) TaskConfig;
    pub fn compressThenDelete(path: []const u8, min_age_days: u64) TaskConfig;
    pub fn compressAndKeep(path: []const u8, min_age_days: u64) TaskConfig;
    pub fn compressOnly(path: []const u8, min_age_days: u64) TaskConfig;
    pub fn archiveOldLogs(path: []const u8, compress_days: u64, delete_days: u64) TaskConfig;
    pub fn aggressiveCleanup(path: []const u8, max_age_days: u64, max_files: usize) TaskConfig;
    pub fn hourlyArchive(path: []const u8) TaskConfig; // v0.1.5+
    pub fn compressOnRotation(path: []const u8) TaskConfig; // v0.1.5+
    pub fn sizeBasedCompression(path: []const u8, bytes: u64) TaskConfig; // v0.1.5+
    pub fn diskUsageTriggered(path: []const u8, percent: u8) TaskConfig; // v0.1.5+
    pub fn lowDiskSpaceTriggered(path: []const u8, min_free: u64) TaskConfig; // v0.1.5+
    pub fn recursiveCompression(path: []const u8, days: u64) TaskConfig; // v0.1.5+
};
```

## See Also
- Compression API - Log compression
- Rotation Guide - Log rotation
- Configuration Guide - Full configuration options
- Telemetry API - Distributed tracing integration
## Telemetry Integration

The scheduler supports optional telemetry integration for distributed tracing. When enabled, each task execution creates a span with relevant attributes.

### Setup

```zig
const logly = @import("logly");

// Create telemetry instance
var telemetry = try logly.Telemetry.init(allocator, .{
    .provider = .file,
    .exporter_file_path = "telemetry_spans.jsonl",
});
defer telemetry.deinit();

// Create scheduler with telemetry
var scheduler = try logly.Scheduler.init(allocator);
defer scheduler.deinit();

scheduler.setTelemetry(&telemetry);
```

### Span Attributes
Task execution spans include the following attributes:

| Attribute | Type | Description |
|---|---|---|
| `task.type` | string | Task type (cleanup, compression, etc.) |
| `task.priority` | string | Task priority level |
| `task.duration_ms` | integer | Execution duration in milliseconds |
| `cleanup.files_deleted` | integer | Files deleted (cleanup tasks) |
| `cleanup.bytes_freed` | integer | Bytes freed (cleanup tasks) |
| `compression.files` | integer | Files compressed (compression tasks) |
| `compression.bytes_saved` | integer | Bytes saved (compression tasks) |
| `health.healthy` | boolean | Health status (health_check tasks) |
| `metrics.log_count` | integer | Log count (metrics_snapshot tasks) |
| `metrics.error_count` | integer | Error count (metrics_snapshot tasks) |
### Telemetry Metrics

The scheduler also records counter and gauge metrics:

- `scheduler.tasks_executed` (counter): Incremented for each task execution
- `scheduler.task_duration_ms` (gauge): Task execution duration
Example Output ​
{
"trace_id": "abc123...",
"span_id": "def456...",
"name": "log_cleanup",
"kind": "internal",
"status": "ok",
"start_time": 1704067200000000000,
"end_time": 1704067200500000000,
"attributes": {
"task.type": "cleanup",
"task.priority": "normal",
"task.duration_ms": 500,
"cleanup.files_deleted": 5,
"cleanup.bytes_freed": 1048576
}
}