
Compact Log Format

The log_compact parameter enables a condensed output format optimized for space-constrained environments.


Overview

The log_compact parameter is a boolean configuration option that reduces log verbosity while maintaining essential information. This format is particularly useful in environments with limited display width.

Supported Environments:

  • Jupyter Notebooks
  • Google Colab
  • Terminal emulators with narrow widths
  • Dashboard displays
  • Split-screen coding environments

Usage Guidelines

Use log_compact=True in the following scenarios:

  1. Interactive Notebooks
     • Jupyter/Colab environments where horizontal space is limited
     • Long log lines cause awkward wrapping
     • Cleaner output improves readability

  2. Narrow Terminal Windows
     • Terminal width under 120 characters
     • Split-screen development setups
     • Remote SSH sessions with limited width

  3. Monitoring Dashboards
     • Real-time log displays on compact screens
     • Embedded log viewers in web applications
     • Mobile device log viewing

  4. Multi-Panel IDEs
     • VS Code with multiple panels
     • Vim/Emacs split windows
     • Any IDE with reduced panel width

Avoid log_compact=True in these situations:

  1. Production Environments
     • Standard terminal width (120+ characters)
     • Log files intended for grep/awk parsing
     • Log aggregation systems (ELK, Splunk)

  2. Detailed Debugging
     • Complex error investigation
     • Production issue analysis
     • Cases requiring full context

  3. Automated Processing
     • Log parsing scripts expecting the standard format
     • CI/CD pipeline analysis
     • Structured log processing tools
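
Following these guidelines usually means choosing the mode per environment rather than hard-coding it. A minimal sketch, assuming an APP_ENV environment variable as the deployment switch (the variable name is illustrative, not part of logly):

import os

from logly import logger

# Illustrative convention: APP_ENV distinguishes production from development
IS_PROD = os.environ.get("APP_ENV", "development") == "production"

# Full-detail logs in production, compact logs for interactive work
logger.configure(log_compact=not IS_PROD)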

Configuration

Basic Configuration

from logly import logger

# Enable compact logging
logger.configure(log_compact=True)

logger.info("Starting data analysis")
logger.success("Loaded 1000 records from database")
logger.warning("Missing values detected in column 'age'")
logger.error("Failed to connect to API", retry_count=3)

Output Comparison

Standard Format (log_compact=False):

2025-01-15T10:30:15.123456+00:00 [INFO] Starting data analysis | module=__main__ | function=<module>
2025-01-15T10:30:15.234567+00:00 [SUCCESS] Loaded 1000 records from database | module=data_loader | function=load_data

Compact Format (log_compact=True):

[INFO] Starting data analysis
[SUCCESS] Loaded 1000 records from database

Parameter Details

logger.configure(
    level: str = "INFO",
    color: bool = True,
    log_compact: bool = False  # Default: False
)
| Parameter   | Type | Default | Description                                                                                   |
|-------------|------|---------|-----------------------------------------------------------------------------------------------|
| log_compact | bool | False   | Enables the compact format: reduces timestamp verbosity and optimizes spacing for narrow displays. |

Format Changes in Compact Mode

Compact mode applies the following optimizations:

  1. Timestamps - Removed or abbreviated
  2. Module Information - Simplified or omitted
  3. Extra Fields - Only essential fields included
  4. Spacing - Minimal padding for compact display
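
Taken together, these changes compress a call with extra fields into one short line. An illustrative example, with the compact rendering modeled on the notebook output shown in the next section (exact field formatting may vary between versions):

logger.configure(log_compact=True)
logger.warning("Missing values detected", column="age")

Illustrative compact output:

[WARNING] Missing values detected (column: age)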

Examples

Jupyter Notebook Workflow

from logly import logger
import pandas as pd

# Configure for notebook environment
logger.configure(
    level="INFO",
    log_compact=True,
    color=True,
    show_time=False
)

# Data processing pipeline
logger.info("Loading dataset")
df = pd.read_csv("sales_data.csv")
logger.success(f"Loaded {len(df)} rows")

logger.info("Cleaning data")
df_clean = df.dropna()
logger.success(f"Removed {len(df) - len(df_clean)} rows")

logger.info("Running analysis")
result = df_clean.groupby('category')['sales'].sum()
logger.success("Analysis complete", categories=len(result))

Output:

[INFO] Loading dataset
[SUCCESS] Loaded 1000 rows
[INFO] Cleaning data
[SUCCESS] Removed 50 rows
[INFO] Running analysis
[SUCCESS] Analysis complete (categories: 5)

Machine Learning Training

from logly import logger

# Configure for training environment
logger.configure(
    level="DEBUG",
    log_compact=True,
    color=True,
    show_module=False,
    show_function=False
)

# Training loop
epochs = 10
for epoch in range(1, epochs + 1):
    logger.info(f"Epoch {epoch}/{epochs}")

    train_loss = 0.5 / epoch
    val_loss = 0.6 / epoch

    logger.success(
        f"Epoch {epoch} complete",
        train_loss=round(train_loss, 4),
        val_loss=round(val_loss, 4)
    )

    if val_loss < 0.1:
        logger.success("Training complete - target loss reached")
        break

Development Environment

from logly import logger

# Configure for split-screen development
logger.configure(
    level="DEBUG",
    log_compact=True,
    color=True,
    show_time=True
)

def process_user_data(user_id: int):
    logger.debug(f"Processing user {user_id}")
    logger.info("Fetching user data")
    logger.success("User data retrieved", user_id=user_id)
    logger.info("Validating data")
    logger.success("Validation passed")
    return {"user_id": user_id, "status": "processed"}

result = process_user_data(123)
logger.success("Processing complete", result=result)

Advanced Configurations

Minimal Notebook Output

# Extremely compact for notebooks
logger.configure(
    log_compact=True,
    show_time=False,
    show_module=False,
    show_function=False
)

logger.info("Minimal output")
logger.success("Perfect for notebooks")

Output:

[INFO] Minimal output
[SUCCESS] Perfect for notebooks

Compact JSON Logging

# Compact JSON format
logger.configure(
    log_compact=True,
    json=True,
    pretty_json=False
)

logger.info("User login", user_id=123, role="admin")

Output:

{"level":"INFO","message":"User login","user_id":123,"role":"admin"}


Performance Characteristics

Resource Usage

| Aspect    | Compact Mode            | Standard Mode             |
|-----------|-------------------------|---------------------------|
| Memory    | Lower (shorter strings) | Higher (full formatting)  |
| Speed     | Slightly faster         | Slightly slower           |
| File Size | 20-40% smaller          | Standard size             |

Performance differences are typically negligible (< 1% memory, < 0.1ms per entry).

File Size Comparison

  • Standard Format: 1000 lines ≈ 150KB
  • Compact Format: 1000 lines ≈ 90KB
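
These figures are ballpark estimates; actual savings depend on how much timestamp and module metadata your entries carry. To measure on your own workload, run the same logging session twice, once per mode, writing to two files, then compare sizes. A minimal sketch (the file names are illustrative):

import os

# Sizes of two runs of the same workload, one per format
standard_size = os.path.getsize("standard.log")
compact_size = os.path.getsize("compact.log")

savings = 1 - compact_size / standard_size
print(f"Compact output is {savings:.0%} smaller ({compact_size} vs {standard_size} bytes)")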

Best Practices

Interactive Development:

logger.configure(log_compact=True, show_time=False)

Production Environments:

logger.configure(log_compact=False)  # Full details

Color Enhancement:

logger.configure(log_compact=True, color=True)

Common Mistakes

Avoid in Production Logs:

# Not recommended - loses important context
logger.configure(log_compact=True)
logger.add("production.log")

Conflicting Settings:

# Contradictory - pretty_json expands each entry across multiple lines, defeating compact mode
logger.configure(log_compact=True, pretty_json=True)

Debugging Complex Issues:

# Not recommended - need full context
logger.configure(log_compact=True, level="DEBUG")


Environment Detection

For automatic environment detection, use this pattern:

import sys

# Detect notebook environment
IN_NOTEBOOK = 'ipykernel' in sys.modules

logger.configure(
    log_compact=IN_NOTEBOOK,
    show_time=not IN_NOTEBOOK
)
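
The width guideline from earlier can be folded into the same pattern; a sketch using the standard library's shutil.get_terminal_size() to enable compact mode below the 120-character threshold:

import shutil
import sys

IN_NOTEBOOK = 'ipykernel' in sys.modules

# Fall back to 80 columns when the width cannot be determined (e.g., no TTY)
NARROW_TERMINAL = shutil.get_terminal_size(fallback=(80, 24)).columns < 120

logger.configure(log_compact=IN_NOTEBOOK or NARROW_TERMINAL)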

Summary

| Aspect        | Compact Mode                             | Standard Mode                  |
|---------------|------------------------------------------|--------------------------------|
| Best For      | Notebooks, narrow terminals, dashboards  | Production logs, debugging     |
| Output Length | 20-40% shorter                           | Full details                   |
| Readability   | Cleaner in constrained spaces            | More context                   |
| Performance   | Slightly faster (negligible)             | Slightly slower (negligible)   |
| File Size     | Smaller                                  | Larger                         |
| Use Case      | Interactive development                  | Production systems             |