See [Why Autometrics?](https://github.com/autometrics-dev#why-autometrics) for more details.
## Quickstart

1. Add `autometrics` to your project's dependencies:

   ```shell
   pip install autometrics
   ```
2. Instrument your functions with the `@autometrics` decorator:

   ```python
   from autometrics import autometrics

   @autometrics
   def my_function():
       # ...
   ```

3. Configure autometrics by calling the `init` function:
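   The body of this step is not shown in the source. A minimal configuration sketch, assuming `init` accepts `tracker` and `service_name` keyword arguments (the `tracker` value mirrors the `AUTOMETRICS_TRACKER` environment variable mentioned below; `service_name` is an assumption):

   ```python
   from autometrics import init

   # Initialize autometrics once at application startup.
   # Parameter names here are assumptions; adjust to your setup.
   init(tracker="prometheus", service_name="my-service")
   ```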
4. Export the metrics for Prometheus:

   ```python
   # This example uses FastAPI, but you can use any web framework
   from fastapi import FastAPI, Response
   from prometheus_client import generate_latest

   app = FastAPI()

   # Set up a metrics endpoint for Prometheus to scrape
   # `generate_latest` returns metrics data in the Prometheus text format
   @app.get("/metrics")
   def metrics():
       return Response(generate_latest())
   ```

5. Run Prometheus locally with the [Autometrics CLI](https://docs.autometrics.dev/local-development#getting-started-with-am) or [configure it manually](https://github.com/autometrics-dev#5-configuring-prometheus) to scrape your metrics endpoint:

   ```sh
   # Replace `8080` with the port that your app runs on
   am start :8080
   ```

6. (Optional) If you have Grafana, import the [Autometrics dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) for an overview and detailed view of all the function metrics you've collected.
## Using `autometrics-py`

- You can import the library in your code and use the decorator for any function:

  ```python
  from autometrics import autometrics

  @autometrics
  def sayHello():
      return "hello"
  ```
- To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing [the VSCode extension](https://marketplace.visualstudio.com/items?itemName=Fiberplane.autometrics).

  > **Note**: We cannot support tooltips without a VSCode extension due to behavior of the [static analyzer](https://github.com/davidhalter/jedi/issues/1921) used in VSCode.

- You can also track the number of concurrent calls to a function by using the `track_concurrency` argument: `@autometrics(track_concurrency=True)`.

  > **Note**: Concurrency tracking is only supported when you set the environment variable `AUTOMETRICS_TRACKER=prometheus`.

- To access the PromQL queries for your decorated functions, run `help(yourfunction)` or `print(yourfunction.__doc__)`.

  > For these queries to work, include a `.env` file in your project with your Prometheus endpoint, e.g. `PROMETHEUS_URL=<your-endpoint>`. If this is not defined, the default endpoint will be `http://localhost:9090/`.
## Dashboards
## Alerts / SLOs

The library uses the concept of Service-Level Objectives (SLOs) to define the acceptable success rate and latency of your functions.
In order to receive alerts, **you need to add a special set of rules to your Prometheus setup**. These are configured automatically when you use the [Autometrics CLI](https://docs.autometrics.dev/local-development#getting-started-with-am) to run Prometheus.

> Already running Prometheus yourself? [Read about how to load the autometrics alerting rules into Prometheus here](https://github.com/autometrics-dev/autometrics-shared#prometheus-recording--alerting-rules).

Once the alerting rules are in Prometheus, you're ready to go.

To use autometrics SLOs and alerts, create one or multiple `Objective`s based on the function(s) success rate and/or latency, as shown above.

The `Objective` can be passed as an argument to the `autometrics` decorator, which will include the given function in that objective.

The example above used a success rate objective. (I.e., we wanted to be alerted when the error rate started to increase.)

You can also create an objective for the latency of your functions like so:
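The latency-objective snippet itself is elided in the source. A sketch of what such an objective configuration might look like, assuming the `Objective`, `ObjectiveLatency`, and `ObjectivePercentile` names live in `autometrics.objectives` (treat the exact import path and enum values as assumptions):

```python
from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Alert when the 99th-percentile latency of functions in this
# objective exceeds 250 ms
API_SLO = Objective(
    "api",
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

@autometrics(objective=API_SLO)
def api_handler():
    # ...
    pass
```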
Autometrics makes it easy to identify if a specific version or commit introduced errors or increased latency.

> autometrics-py will track support for build_info using the OpenTelemetry tracker via [this issue](https://github.com/autometrics-dev/autometrics-py/issues/38)

The library uses a separate metric (`build_info`) to track the version and, optionally, the git commit of your service.

It then writes queries that group metrics by the `version`, `commit` and `branch` labels so you can spot correlations between code changes and potential issues.
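A configuration sketch of how the `build_info` labels might be supplied via environment variables (the variable names are assumptions; check the library's configuration documentation for the exact names):

```shell
# Assumed variable names for populating the build_info metric
export AUTOMETRICS_VERSION=1.0.0
export AUTOMETRICS_COMMIT=$(git rev-parse HEAD)
export AUTOMETRICS_BRANCH=$(git branch --show-current)
```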
You can enable exemplar collection by setting `AUTOMETRICS_EXEMPLARS=true`. You also need to enable exemplar storage in Prometheus by launching it with the `--enable-feature=exemplar-storage` flag.
## Exporting metrics
There are multiple ways to export metrics from your application, depending on your setup. You can see examples of how to do this in the [examples/export_metrics](https://github.com/autometrics-dev/autometrics-py/tree/main/examples/export_metrics) directory.

If you want to export metrics to Prometheus, you have two options with both the `opentelemetry` and `prometheus` trackers:

1. Create a route inside your app and respond with `generate_latest()`:

   ```python
   # This example uses FastAPI, but you can use any web framework
   from fastapi import FastAPI, Response
   from prometheus_client import generate_latest

   app = FastAPI()

   # Set up a metrics endpoint for Prometheus to scrape
   @app.get("/metrics")
   def metrics():
       return Response(generate_latest())
   ```

2. Specify `prometheus` as the exporter type, and a separate server will be started to expose metrics from your app:
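   The configuration for this option is elided in the source. A sketch, assuming `init` accepts an `exporter` dict (the exact shape of the dict is an assumption; port 9464 matches the preselected Autometrics port mentioned elsewhere in this document):

   ```python
   from autometrics import init

   # Assumption: the exporter is configured via a dict passed to `init`;
   # 9464 is the conventional Autometrics metrics port
   init(
       tracker="prometheus",
       exporter={
           "type": "prometheus",
           "port": 9464,
       },
   )
   ```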
For the OpenTelemetry tracker, you have more options, including a custom metric reader. You can specify the exporter type to be `otlp-proto-http` or `otlp-proto-grpc`, and metrics will be exported to a remote OpenTelemetry collector via the specified protocol. You will need to install the respective extra dependency in order for this to work, which you can do when you install autometrics:

```sh
pip install autometrics[exporter-otlp-proto-http]
pip install autometrics[exporter-otlp-proto-grpc]
```

After installing it you can configure the exporter as follows:
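The configuration example itself is cut off in the source. A hedged sketch, assuming the same `exporter` dict is used to select the OTLP exporter (the `type`, `endpoint`, and `push_interval` keys are assumptions; the endpoint shown is the default OTLP/HTTP collector port):

```python
from autometrics import init

# Assumption: the OTLP exporter is selected via the `exporter` dict;
# the endpoint and push interval values are illustrative only
init(
    tracker="opentelemetry",
    exporter={
        "type": "otlp-proto-http",
        "endpoint": "http://localhost:4318/",
        "push_interval": 1000,
    },
)
```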