Telemetry Framework Demo

Introduction

This demonstration showcases the utility of the Telemetry Framework integrated into a Node.js microservice called Processor. The framework provides comprehensive observability capabilities including real-time log monitoring, AI-powered log analysis, and an extensible plugin system for automation and integrations.

You can find the complete source code, documentation, and video demonstration at https://doi.org/10.5281/zenodo.17290889. The AI assistant's capabilities have been validated with extensive prompts and testing; additional validation details are available at https://doi.org/10.5281/zenodo.16265108.

Table of Contents

  1. Processor Microservice and Its Endpoints
  2. Telemetry Framework and Its Features
  3. How to Use This Demo
  4. About This Demo

1. Processor Microservice and Its Endpoints

The Processor microservice is a simplified Node.js application that demonstrates basic operational scenarios and telemetry integration. It provides several endpoints that generate different types of logs and telemetry data for demonstration purposes.

Available Endpoints:

  • GET / - Serves the main welcome page with navigation links (generates 1 log entry)
  • POST /process - Simulates data processing with artificial delay (generates 3 log entries: start, debug, finish)
  • GET /github-user/:username - Demonstrates missing API key error handling (generates 3 log entries including error)

Each endpoint is designed to generate meaningful telemetry data at different log levels (INFO, DEBUG, ERROR) to showcase how the Telemetry Framework captures and processes application events.
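To make the log-entry counts above concrete, here is a minimal sketch of the entries each endpoint might emit. The field names and messages are assumptions for illustration only; the real Processor wires these into Express route handlers and may log different details.

```javascript
// Build one structured log entry (shape is an assumption, not the real schema).
function makeEntry(level, message) {
  return { level, message, time: new Date().toISOString() };
}

// GET / -> 1 log entry
function welcomeLogs() {
  return [makeEntry('INFO', 'welcome page served')];
}

// POST /process -> 3 log entries: start, debug, finish
function processLogs(payload) {
  return [
    makeEntry('INFO', 'processing started'),
    makeEntry('DEBUG', 'payload: ' + JSON.stringify(payload)),
    makeEntry('INFO', 'processing finished'),
  ];
}

// GET /github-user/:username -> 3 log entries, including an ERROR
function githubUserLogs(username) {
  return [
    makeEntry('INFO', 'lookup requested for ' + username),
    makeEntry('DEBUG', 'checking for GITHUB_API_KEY'),
    makeEntry('ERROR', 'missing API key, cannot call GitHub'),
  ];
}
```

Grouping each endpoint's entries this way mirrors what you will see in the Logs panel: one INFO line for the welcome page, a start/debug/finish triple for /process, and an error-terminated triple for the GitHub lookup.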

2. Telemetry Framework and Its Features

The Telemetry Framework is a comprehensive observability solution for Node.js applications. It can be easily integrated into any Express.js application with minimal configuration and provides powerful monitoring, analysis, and automation capabilities.
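A "minimal configuration" integration like this typically works through Express-style middleware that captures each request as a log entry. The sketch below shows that general pattern only; every name in it is hypothetical and is not the framework's actual API.

```javascript
// Hypothetical middleware that records one entry per request into a store.
// In a real Express app this would be registered with app.use(...).
function telemetryMiddleware(store) {
  return function (req, res, next) {
    store.push({
      level: 'INFO',
      method: req.method,
      url: req.url,
      time: new Date().toISOString(),
    });
    next(); // hand control to the next handler in the chain
  };
}

// Exercise the middleware with a plain object standing in for a request.
const captured = [];
const mw = telemetryMiddleware(captured);
mw({ method: 'GET', url: '/' }, {}, function () {});
```

The appeal of the middleware approach is that the host application needs only one registration line; every route then produces telemetry without further changes.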

Main Dashboard

  • Real-time Node.js heap and runtime statistics monitoring
  • Quick navigation cards to Logs, AI Chat, and Plugin System
  • Central hub for all observability activities

Advanced Logs Panel

  • Real-time log streaming and collection
  • Advanced filtering by text, severity, service, or HTTP request
  • Pause/resume, clear logs, and adjustable retention settings
  • Detailed log inspection with trace context
  • MongoDB-like query syntax for power users
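To give a feel for the MongoDB-like syntax, the sketch below implements a tiny matcher supporting equality plus $gte and $in, and runs one query over sample log entries. The framework's actual operator set is not documented here, so treat the specific operators as an assumption.

```javascript
// Tiny MongoDB-style matcher: equality, $gte, and $in only (illustrative).
function matches(doc, query) {
  return Object.entries(query).every(([field, cond]) => {
    if (cond !== null && typeof cond === 'object') {
      return Object.entries(cond).every(([op, val]) => {
        if (op === '$gte') return doc[field] >= val;
        if (op === '$in') return val.includes(doc[field]);
        return false; // unknown operator
      });
    }
    return doc[field] === cond; // plain equality
  });
}

const logs = [
  { level: 'ERROR', service: 'processor', ts: 100 },
  { level: 'INFO', service: 'processor', ts: 150 },
  { level: 'ERROR', service: 'gateway', ts: 200 },
];

// "ERROR logs from the processor service with ts >= 50"
const hits = logs.filter((l) =>
  matches(l, { level: 'ERROR', service: 'processor', ts: { $gte: 50 } })
);
```

Queries of this shape compose naturally with the panel's text and severity filters, which is why the syntax suits power users working with large log sets.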

AI-Powered Analysis

  • Natural language queries about system logs and behavior
  • Automated root cause analysis and incident summaries
  • Transparent tool usage reporting for each analysis
  • Context-aware conversations for investigative workflows
  • Extensive prompt validation and testing

Extensible Plugin System

  • Custom JavaScript plugins for automation and integrations
  • Event-driven architecture for real-time responses
  • Runtime plugin management (add, remove, import/export)
  • Integration capabilities with external tools and services
  • Example use cases: alerting, notifications, data forwarding

3. How to Use This Demo

Initial Exploration

  1. Access the Telemetry UI: Click on the "Telemetry Framework UI" link above to open the dashboard.
    (Authentication may be required; the password is available in the article submission.)
  2. Explore the Dashboard: Take a look around the UI to familiarize yourself with the interface:
    • Check the real-time heap statistics at the top - toggle "Auto Update" for live monitoring
    • Browse the Logs panel to see existing log entries (endpoints are called automatically every few minutes to generate sample data)
    • Notice the different log levels (INFO, DEBUG, ERROR) and their timestamps

Generate Your Own Logs

While logs are generated automatically, let's create some manually to see the system in action:

  1. Use the Swagger UI: Open the "Processor Swagger UI" link and execute each endpoint:
    • Execute POST /process with some sample data (generates 3 log entries)
    • Execute GET /github-user/{username} with any username (generates 3 log entries including an error)
  2. Observe the Results: Return to the Telemetry UI and check the Logs panel:
    • You should see your new log entries appearing in real-time
    • Notice how the error logs are clearly marked and contain detailed context
    • Click on individual log entries to see full details and trace information

Interact with the AI Assistant

  1. Open AI Chat: Navigate to the AI Chat panel and ask questions about the logs you just generated:
    • Summarize the recent error logs
    • Why did the GitHub user call fail?
    • What happened in the last /process request?
    • Show me logs from the last 30 minutes (to limit token usage)
  2. Review AI Responses: The assistant will analyze the telemetry data and provide explanations, including which tools and queries were used for the analysis.

Additional Features to Explore

  1. Plugin System: Visit the Plugin System panel to see what's available:
    • There's an example plugin that alerts when "Inserted 0 Elements" appears in logs
    • Please don't delete this plugin so other reviewers can see it
    • You can create, import, or export your own plugins for automation
  2. Advanced Features: Look for "Useful Links" → "Dev Tools" (may be in a hidden menu):
    • Generate Custom Logs: Create custom log entries that will appear in the server console
    • Generate Mock Data: Click "Generate Mock Data" to create sample logs for testing filters and searches. Clear logs when you're done testing (new logs generate automatically every few minutes)

Tips for Best Experience:
  • When asking the AI about logs, specify time ranges like "last 30 minutes" to reduce token usage
  • Don't delete other people's chat sessions - if your session misbehaves, delete only your own and create a new one
  • Use the mock data generator to test filtering capabilities with larger datasets

4. About This Demo

This demonstration showcases the integration of our Telemetry Framework into the Processor microservice. The framework is designed for easy integration into existing Node.js applications with minimal code changes. All telemetry data is processed in-memory for optimal performance and privacy - no data leaves your environment unless explicitly configured through custom exporters.

The framework represents a modern approach to application observability, combining traditional monitoring with AI-powered analysis and extensible automation capabilities.