Friday, March 20, 2026
support@conference.yunohost.org

[12:57:41] <orhtej2> > <@u9000:u9.tel> is there an official/correct way to set up server alerts with something like ntfy?

No, there's an unofficial gotify integration
[12:57:42] <orhtej2> https://github.com/YunoHost-Apps/yuno_goti_notify_ynh
[13:32:52] <FbIN> orhtej2: 👌
[15:37:24] <DJ Chase (fae/faer)> fwiw there is an ntfy app in main
[15:39:07] <FbIN> > fwiw there is an ntfy app in main
yes, but gotify is a better alternative for privacy enthusiasts :)
[15:39:33] <DJ Chase (fae/faer)> fair
[15:39:48] <FbIN> :)
[15:40:28] <DJ Chase (fae/faer)> actually ntfy seems to be set up by default to create restricted channels in the web page, meaning a token is required to post/view
[15:40:46] <DJ Chase (fae/faer)> also use a uuid for the channel name 🙃
[15:42:48] <FbIN> yes, but ntfy's privacy stance started to skew, and hence gotify came into the picture. But I get your point.
[15:47:51] <DJ Chase (fae/faer)> that makes sense
[15:52:52] <DJ Chase (fae/faer)> found what i was looking for: ssh apprise and uptime kuma
[15:57:49] <FbIN> try Glances
[16:56:18] <DJ Chase (fae/faer)> i tried glances a while ago -- it seemed basically like htop but in the browser and also without perms to see other processes
[16:56:46] <DJ Chase (fae/faer)> added all the services to kuma and actually found some issues. glad i did that :)
[16:57:24] <FbIN> agreed, glances is simple for people who don't have access to a terminal, or who aren't near one at the time
[16:57:39] <FbIN> kuma is good, but resource-hungry in itself; good otherwise though.
[16:59:55] <DJ Chase (fae/faer)> in the past i would have written a shell script that emails my phone number and put it in /usr/share/sbin and run it in a cronjob
[16:59:55] <DJ Chase (fae/faer)> yeah well aware this is definitely a heavy way of setting this up
[16:59:55] <DJ Chase (fae/faer)> but i'm trying to do things the 'proper' way this time to keep everything easy to maintain
[17:01:41] <FbIN> > in the past i would have written a shell script that emails my phone number and put it in /usr/share/sbin and run it in a cronjob
Is this not the best way?
[17:02:06] <FbIN> I mean I am thinking of running a shell script myself for something similar
[17:06:29] <DJ Chase (fae/faer)> also for stuff like ups monitoring it doesn't need to continually run it could be a trigger
[17:06:29] <DJ Chase (fae/faer)> which is much lighter, but also horrible to maintain
[17:06:29] <DJ Chase (fae/faer)> could make an argument that it should be systemd instead if that's your init system
[17:06:30] <DJ Chase (fae/faer)> (also a problem is that if you don't document how the monitoring works then it's really easy to miss a shell script in /usr/share/sbin)
[17:06:30] <DJ Chase (fae/faer)> but also email-to-sms bridging is somewhat slow and unreliable
[17:12:31] <FbIN> > could make an argument that it should be systemd instead if that's your init system
speaking of systemd: https://cybersecurity88.com/news/ubuntu-cve-2026-3888-timing-flaw-in-systemd-cleanup-enables-root-privilege-escalation/
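On systemd as the alternative to cron: the equivalent of a cron entry is a oneshot service plus a timer. A rough sketch follows; the unit names and the script path are illustrative placeholders, not anything shipped by YunoHost:

```ini
# /etc/systemd/system/ram-alert.service  (illustrative path)
[Unit]
Description=Check RAM usage and send a notification

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/ram-alert.sh
```

```ini
# /etc/systemd/system/ram-alert.timer
[Unit]
Description=Run ram-alert every 5 minutes

[Timer]
OnCalendar=*:0/5

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now ram-alert.timer`; `systemctl list-timers` then shows the next run, which also makes the monitoring discoverable rather than a forgotten crontab line.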
[17:12:47] <FbIN> > also for stuff like ups monitoring it doesn't need to continually run it could be a trigger
yes, that makes sense
[17:12:56] <FbIN> > (also a problem is that if you don't document how the monitoring works then it's really easy to miss a shell script in /usr/share/sbin)
how very true.
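For anyone taking the cron-script route anyway, a minimal sketch might look like the following. The ntfy URL, topic name, and install path are all placeholder assumptions; the 80% threshold just mirrors the Grafana example later in this log, and the formula is the same `(1 - MemAvailable/MemTotal) * 100` used-RAM percentage that node_exporter dashboards use:

```shell
#!/bin/sh
# Minimal RAM-alert sketch for a cronjob, e.g.:
#   */5 * * * * /usr/local/sbin/ram-alert.sh
# NTFY_URL and the topic name are placeholders -- point them at your
# own instance and topic.

THRESHOLD=80  # alert when used RAM exceeds this percentage

# Used-RAM percentage from /proc/meminfo: (1 - MemAvailable/MemTotal) * 100
used=$(awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2} END {printf "%d", (1 - a/t) * 100}' /proc/meminfo)

if [ "$used" -gt "$THRESHOLD" ]; then
    # ntfy topics are just URL paths; a plain-text POST is enough here
    curl -s -H "Title: ${used}% RAM usage" \
         -d "RAM usage is above ${THRESHOLD}%!" \
         "${NTFY_URL:-https://ntfy.sh}/your-topic-here" >/dev/null
fi
```

As noted above, the hard part is remembering this script exists, so document it wherever you keep your server notes.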
[22:17:09] <DJ Chase (fae/faer)> i got grafana to work with ntfy, but it was a pain in the ass to figure out so here's how to do it if you want:

# Setting up the alert contact point

1. In Grafana, go to *Home > Alerting > Contact points* and press "create contact point".
1. Set the integration to "webhook" and the URL to "https://ntfy.sh/" (or your self-hosted instance). Importantly, you want to link to the root URL, not the specific topic you're trying to send to.
1. In "Optional webhook settings", set "HTTP method" to POST and paste your ntfy access token into "Authorization Header - Credentials".
1. Scroll down to "Custom Payload", press "add", press "edit payload template", and then press "Enter custom payload template".
1. Enter the JSON below into the custom payload template (but update it for whatever event you're trying to alert about):

```json
{
"topic": "ntfy-topic-name",
"message": "RAM usage is above 80%!",
"title": "80% RAM Usage",
"tags": ["warning"],
"priority": 5
}
```

# Setting up the alert condition

For this example, we're going to assume you want an alert when RAM usage exceeds 80%. It also helps tremendously to have a dashboard like Prometheus Node Exporter Full already set up, because then you can just copy its queries.

6. In Grafana, go to *Home > Alerting > Alert rules* and press "New alert rule".
7. In a new tab, go to the dashboard with the metric you want to copy, and press "edit dashboard".
8. Looking at the code for the RAM Used metric, we see it's the following:

```
clamp_min((1 - (node_memory_MemAvailable_bytes{instance="$node", job="$job"} / node_memory_MemTotal_bytes{instance="$node", job="$job"})) * 100, 0)
```

9. Back in the alert creation tab, select `node_memory_MemAvailable_bytes` as the first metric.
10. We know the filters we need are `instance="$node", job="$job"`, so set them as such. You'll need the actual values of those variables, though; they're at the top of the dashboard page.
11. Now add a second query for `node_memory_MemTotal_bytes`.
12. Then, in the "Expressions" section of the page, delete the threshold expression.
13. Add a math expression. The original formula was `(1 - MemAvailable / MemTotal) * 100`. We're going to use the same formula here, but the variable syntax is different, so it will actually need to be `(1 - $A / $B) * 100`.
14. Finally, add a threshold expression. Set "Input" to C (which is the math expression we just made), and "is above" to 80.
15. Press "Set 'D' as alert condition".
16. Set up sections three and four on that page however you like.
17. In "5. Configure notifications", set the contact point to the one you created earlier.
18. Press save.
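Before wiring the contact point into Grafana, it can save some head-scratching to sanity-check the payload and token from a shell first. This is a sketch, not part of the original how-to: the topic name is a placeholder, and `NTFY_TOKEN` is assumed to hold your ntfy access token:

```shell
# The same payload Grafana's custom payload template will send.
payload='{
  "topic": "ntfy-topic-name",
  "message": "RAM usage is above 80%!",
  "title": "80% RAM Usage",
  "tags": ["warning"],
  "priority": 4
}'

# Quick local JSON validity check before sending anything.
printf '%s' "$payload" | python3 -m json.tool >/dev/null && echo "payload OK"

# The actual test notification, POSTed to the root URL just like the
# contact point will (uncomment to send):
# curl -s -H "Authorization: Bearer $NTFY_TOKEN" -d "$payload" https://ntfy.sh/
```

If the curl test lands on your phone, any remaining problems are on the Grafana side rather than in the payload or token.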
[22:19:42] <miro5001> You should write a post in the forum so it helps others
[22:19:48] <DJ Chase (fae/faer)> i should