A complete walkthrough of instrumenting a Python service using the official prometheus_client library.
See the overview for the general approach.
1. Add the dependency
requirements.txt:
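A minimal sketch of the dependency entry. The version pin below is an assumption for illustration; pin to whatever version your service has actually tested against.

```text
# requirements.txt
prometheus-client>=0.19  # version floor is illustrative, not prescriptive
```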
2. Register custom metrics and expose /metrics
start_http_server spins up a dedicated HTTP server on the given port that serves /metrics. The default registry
already includes process metrics (CPU, memory, open file descriptors) and Python runtime metrics (GC stats, thread counts).
3. Declare the metrics endpoint in ops.json
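The exact ops.json schema is defined by LocalOps, so the field names below are illustrative assumptions only; check the overview linked above for the authoritative shape. The intent is simply to point the platform at the port and path the server from step 2 exposes.

```json
{
  "metrics": {
    "path": "/metrics",
    "port": 8000
  }
}
```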
4. Visualize with a community dashboard
Python doesn’t have a single canonical community dashboard the way the JVM or Node.js ecosystems do, but the metrics exposed by the default registry (process_* and python_*) are well supported by generic process dashboards.
| Dashboard | ID | Notes |
|---|---|---|
| Prometheus Stats | 2 | Generic dashboard for the process_* metrics (CPU, memory, file descriptors) that any Python service using prometheus_client exposes. |
| Node Exporter Full | 1860 | Host-level metrics if you also want OS-level context for your Python service. |