[00:00:25]
<otm33> No second Nextcloud on the same machine?
[00:07:51]
<tomlamo> no
[00:07:51]
<tomlamo> Still remains to be seen whether everything works with notifypush, but it's already a big step!
[00:07:51]
<otm33> That's something, at least...
[00:07:51]
<tomlamo> Yes, well done, Nextcloud is working again!
[00:07:52]
<tomlamo> A huge thank you! I wouldn't have managed on my own. I'll stop here for tonight and retry the upgrade another time.
[00:07:53]
<OnEnemy> Success! I was able to federate. I'll close my forum post now
[00:52:58]
<tomlamo> For the record: with snappymail disabled, the upgrade to Nextcloud 32 went fine. I then re-enabled it after tweaking the relevant configs. Thanks again to those who helped me!
[14:29:30]
<@buny:nightcity.chat> Are there any apps on Yuno that allow ripping of CDs/DVDs/Blurays? I want to set up a Jellyfin on my server and being able to rip my physical collection right onto the server would be helpful. Less of a pain than setting up a dual boot too.
[14:44:33]
<miro5001> I believe every OS has a CD/DVD ripper app. What OS do you have?
[14:49:35]
<@buny:nightcity.chat> It's just the regular Yunohost install with the included Debian.
[15:06:24]
<@buny:nightcity.chat> At worst I'll slap puppy linux or some other low octane distro onto a thumb drive to accomplish the same thing, but it would be nice to keep the server up while I ripped.
[15:24:55]
<Gwên> otm33 Back! The service crashed again today at 15:58:58. I noticed that dnsmasq stopped sending anything afterwards, since the logs end at that moment, and I spotted the crash at 16:11. Here's what the logs of the last three seconds look like (I'm sending a video because there's too much to copy)
[15:30:00]
<Gwên> https://aria.im/_bifrost/v1/media/download/AU5S6Zr2GFhviDXNsurNuFcyglXwY62NnQS49MeWurLv2PSqJV6cwiX3zyRXI7rdhEGTJ6X6Xv36xVOteTF9fjFCeczhHShAAGdydHQuZnIvNzVsQU10Qlc3V2lIbXZyS1JrZWlrTDV6Q3NjMTM0bWw
[15:32:08]
<Gwên> I have the *impression* the problem comes from Matrix
[15:35:44]
<otm33> 😱
[15:35:45]
<otm33> Is it the federation between instances that's doing that??
[15:35:47]
<Gwên> Well, it sure looks like it
[15:35:53]
<otm33> What's the cache size in the dnsmasq conf file?
[15:35:55]
<Gwên> How do I find it?
[15:35:57]
<Gwên> The thing is, I made my public room directory available on matrixrooms.info so that my rooms would be discoverable by friends who are on matrix.org. So I'm wondering if that's the reason. Like, could it be that every time someone runs a query on matrixrooms, it queries my server (among others), and if there are too many visits at once it can't handle the load.
[15:35:59]
<otm33> I don't remember the name, but I think it's fairly self-explanatory. In any case there isn't much in the file.
[15:36:00]
<otm33> It's still surprising, though.
[15:36:01]
<Gwên> No, I mean how do I find out the cache size, sorry
[15:40:33]
<otm33> It's in /etc/dnsmasq.conf I think, but I can't check right now
[15:40:33]
<Gwên> cache-size=256
[15:40:33]
<otm33> I hadn't seen that. Yes, that's quite possible.
[15:45:56]
<otm33> That must be the YunoHost default.
[15:46:24]
<otm33> At least you know why dnsmasq is crashing...
[16:01:00]
<Gwên> Clearly
[16:07:38]
<Gwên> So I only see two solutions:
1/ remove my server from the matrixrooms results (but it's a pain to have to give up that tool)
2/ raise the allowed number of concurrent requests.
To me, the second option is the more "comfortable" one because I wouldn't have to touch Matrix. But the question I'm asking myself is: couldn't that cause me more problems?
[16:20:03]
<miro5001> I think this could solve the problem
https://github.com/matrix-org/synapse/issues/8338
[16:21:45]
<miro5001> > I got around #8118 by increasing dnsmasq's max simultaneous connections to 300 from the default of 100, and its number of cache entries to 4096 up from 256, then restarting synapse twice
[16:22:57]
<miro5001> And this https://github.com/matrix-org/synapse/issues/4256#issue-387041502
[16:24:15]
<Gwên> Hiiiiiiiiiin
[16:28:04]
<Gwên> So it would be a bug, OK
[16:31:07]
<Gwên> But then, won't raising the request limit to 300 have any particular consequences for my server?
[16:32:25]
<Gwên> I assume not, but I'd rather ask
[16:32:32]
<miro5001> No, not really. One of the devs reports that it's expected in a federation context: Synapse can send requests to a hundred servers per second, and each request needs several DNS queries
[16:32:33]
<m606> probably a tad more CPU/RAM, but it should be negligible
[16:32:36]
<Gwên> OK, good, that's reassuring :)
[16:32:37]
<Gwên> And now the million-dollar question
[16:32:38]
<Gwên> How do I do it? :jdicajdirien:
[16:52:54]
<m606> @buny:nightcity.chat Yunohost is Debian-based. So any software made for Debian should work on YNH.
check out https://b3n.org/automatic-ripping-machine/
It officially only supports Ubuntu (Debian-based too) and Docker, but it may work on Debian. If it doesn't, you can fall back on the same CLI tools the project uses, such as `abcde` for audio (https://wiki.debian.org/Ripping), `MakeMKV` for video, etc., which are available on Debian.
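For the CLI route, a minimal `~/.abcde.conf` could look like the sketch below. The output path is a hypothetical media-library location, not something from this thread, and `abcde.conf` is plain shell, so it's just variable assignments:

```shell
# ~/.abcde.conf — sketch only; OUTPUTDIR is an assumed library path
OUTPUTTYPE=flac                                   # rip CDs to FLAC
OUTPUTDIR=/home/yunohost.multimedia/share/Music   # hypothetical Jellyfin music folder
CDROM=/dev/sr0                                    # typical optical drive device node
```

With that in place, running `abcde` with a disc in the drive should pick the settings up automatically.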
[16:54:34]
<m606> add `dns-forward-max=300` to the dnsmasq conf file (the same one where you added query logging for debugging, which by the way you can remove if you haven't already), then restart the service
[16:56:08]
<m606> `/etc/dnsmasq.conf`
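m606's change, plus the cache bump from the synapse issue quoted above, can be sketched like this. The snippet works on a scratch copy so nothing on the system is touched; apply the same edits to `/etc/dnsmasq.conf` as root and restart dnsmasq:

```shell
# Sketch of the dnsmasq tuning discussed above, done on a scratch copy.
conf=./dnsmasq.conf.sample
printf 'cache-size=256\n' > "$conf"                 # the default seen in the thread

sed -i 's/^cache-size=.*/cache-size=4096/' "$conf"  # enlarge the DNS cache (per the synapse issue)
grep -q '^dns-forward-max=' "$conf" \
  || echo 'dns-forward-max=300' >> "$conf"          # raise max concurrent forwarded queries

cat "$conf"
# On the real server: edit /etc/dnsmasq.conf, then: systemctl restart dnsmasq
```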
[16:57:19]
<Gwên> Ahhhhh, of course
[16:59:44]
<Gwên> Silly me
[16:59:44]
<Gwên> Thanks a lot!!
[16:59:44]
<Gwên> I'll do it when I get home and keep you posted
[17:02:14]
<m606> and if it's fixed, it would be great if you posted a recap of what worked in your forum thread, for the next person who runs into the problem
[17:02:25]
<Gwên> Sure, it will be a pleasure!
[17:02:29]
<Gwên> The upside is that this kind of problem forces me to pore over documentation and learn lots of command lines, so I come out of it with quite a bit more experience for future problems
[17:02:30]
<Gwên> That actually matters
[17:25:43]
<DJ Chase (fae/faer)> is this something i should be concerned about?
[17:25:51]
<DJ Chase (fae/faer)> https://aria.im/_bifrost/v1/media/download/AfzykaZrXgsb4H9Q_1-7YRcf6VV9kTSfumzBRjeBJiz3t6W5hiSVMjb8nJChjUqvSQh0j-z_vCYus6xN8C-bZd9CecznvhrAAHJpb3QuZmlyZWNoaWNrZW4ubmV0L0ZXVld1WUlNaVljREx5YXZ6RVZIV0Nicg
[17:25:51]
<DJ Chase (fae/faer)> (happened while updating)
[17:26:42]
<DJ Chase (fae/faer)> also pressing ok does nothing
[17:26:46]
<DJ Chase (fae/faer)> oh shit it deleted the whole app
[17:27:04]
<DJ Chase (fae/faer)> https://paste.yunohost.org/faqixozoso
[17:27:54]
<DJ Chase (fae/faer)> > `2026-03-07 12:19:56,748: WARNING - /var/cache/yunohost/app_tmp_work_dirs/app_4gh4j3ta/restore: line 14: npm: command not found`
reinstall npm and try again?
[17:28:25]
<otm33> So the cause seems identified. Maybe a warning should also be added to the synapse package. Gwên, also keep in mind that regenerating dnsmasq.conf will wipe out the custom dns-forward-max addition.
[17:29:40]
<DJ Chase (fae/faer)> i can't open the backup file in the web interface?
[17:29:50]
<Chatpitaine Caverne> Cool, Gwên, that you found that thing. And thanks for digging so much. We'll all benefit from it when needed.
I have a question: is there a way to find all the services associated with an application?
I'd like to adopt Borg backup, but the script doesn't stop services, and for integrity reasons I'd rather stop an application's services while it's being backed up (or am I too sensitive to what Murphy's law can do?).
At worst, my current scripts hard-code stopping the services, so I can keep doing that.
[17:29:51]
<DJ Chase (fae/faer)> oh no hang on it just took a while
[17:37:45]
<DJ Chase (fae/faer)> failed again even after installing npm
[17:37:45]
<Gwên> In my case I'm on Conduit; I don't know whether the problem is reproducible with Synapse
[17:37:45]
<Gwên> OK, that makes sense to me
[17:37:46]
<DJ Chase (fae/faer)> https://paste.yunohost.org/vawiduliri
[17:37:46]
<Chatpitaine Caverne> node(.js) again, what did it find this time ...
[17:39:42]
<DJ Chase (fae/faer)> oops
[17:42:31]
<DJ Chase (fae/faer)> > `2026-03-07 12:30:47,769: WARNING - Variable $path_with_nodejs wasn't initialized when trying to replace __PATH_WITH_NODEJS__ in /etc/systemd/system/homarr-tasks.service`
how do i fix this?
[17:50:35]
<Gwên> If you use Element, you can use the "code" markdown to keep that compact ^^
[17:50:44]
<DJ Chase (fae/faer)> yeah i just meant to paste one line
[17:50:47]
<DJ Chase (fae/faer)> the yunopaste link is above
[17:50:50]
<Gwên> Put a ``` at the beginning and at the end of your lines
[17:50:54]
<DJ Chase (fae/faer)> anybody have any idea why npm would randomly be uninstalled, and what the correct value for `__PATH_WITH_NODEJS__` should be?
[17:50:55]
<otm33> This may help: run `N_PREFIX=/opt/node_n/ /usr/share/yunohost/helpers.v2.1.d/vendor/n/n install 24` and retry the upgrade
[17:50:57]
<DJ Chase (fae/faer)> rn i need to restore not upgrade
[17:50:58]
<DJ Chase (fae/faer)> it deleted the whole app
[17:51:54]
<otm33> I meant restore
[17:51:55]
<DJ Chase (fae/faer)> same error
https://paste.yunohost.org/ahesowutot
[17:51:56]
<DJ Chase (fae/faer)> hang on i think i might have copy pasted the command you gave me wrong lol let me try again
[17:51:57]
*DJ Chase (fae/faer) for some reason can't reliably copy/paste rn
[17:51:58]
<DJ Chase (fae/faer)> https://paste.yunohost.org/raw/pijedoruti
[17:51:58]
<DJ Chase (fae/faer)> what's the correct value for `$path_with_nodejs`?
[17:51:58]
<DJ Chase (fae/faer)> still failed
[17:51:59]
<DJ Chase (fae/faer)> also why isn't that set?
[17:53:51]
<otm33> In your system it should be something like `PATH=/opt/node_n/n/versions/node/24.??.??/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games` but this issue is a bit weird ...
[17:54:46]
<DJ Chase (fae/faer)> admin user's path: `/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games`
sudo's path: `/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games`
[17:55:21]
<DJ Chase (fae/faer)> admin user's path: `/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games`
root's path: `/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin`
[17:56:06]
<DJ Chase (fae/faer)> i have not touched anything config-wise that would affect this
[17:58:36]
<DJ Chase (fae/faer)> try setting PATH to what it should be temporarily and restoring? or figure out why PATH is borked and fix it and then restore?
[18:12:55]
<DJ Chase (fae/faer)> i'm going to try setting path just for the restore command for now
[18:15:18]
<DJ Chase (fae/faer)> that still didn't work!?
https://paste.yunohost.org/raw/yawumegoxa
[18:16:06]
<DJ Chase (fae/faer)> the command i used:
```
# PATH="/opt/node_n/n/versions/node/24.??.??/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games" yunohost backup restore homarr-pre-upgrade1
```
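One likely reason the command above still failed: inside double quotes, `24.??.??` is never glob-expanded, so PATH ends up containing a literal `?` path. A sketch of resolving the real directory first, demoed on a scratch tree (on the server the root would be `/opt/node_n/n/versions/node`):

```shell
# Demo tree standing in for /opt/node_n/n/versions/node
root=./n-demo/versions/node
mkdir -p "$root/18.20.8/bin" "$root/22.22.0/bin" "$root/24.13.1/bin"

# Pick the newest installed 24.x, then build PATH from the real directory
node_bin=$(ls -d "$root"/24.*/bin | sort -V | tail -n1)
echo "PATH=$node_bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```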
[18:19:10]
<DJ Chase (fae/faer)> so i think the system path probably needs to be fixed then
anybody know where to do that?
[18:43:45]
<DJ Chase (fae/faer)> oh of course lol
[18:43:45]
<otm33> `/opt/node_n/n/versions/node/24.??.??` => I meant you have to replace this with the latest 24.x minor version present in /opt/node_n/n/versions/node/
[18:48:05]
<DJ Chase (fae/faer)> i thought that was a glob
[18:49:43]
<DJ Chase (fae/faer)> i only have 18.20.8 and 22.22.0
[18:57:58]
<otm33> Could you re-run `N_PREFIX=/opt/node_n/ /usr/share/yunohost/helpers.v2.1.d/vendor/n/n install 24` and check if it installs version 24 in /opt/node_n/n/versions/node/ ?
[18:59:57]
<miro5001> We should think about compiling homarr in the repo
[18:59:57]
<DJ Chase (fae/faer)> still failed
https://paste.yunohost.org/raw/jacapazola
[18:59:57]
<DJ Chase (fae/faer)> now it did
[18:59:57]
<DJ Chase (fae/faer)> any idea why it randomly broke though?
[18:59:57]
<DJ Chase (fae/faer)> weird
[18:59:57]
<DJ Chase (fae/faer)> i'll try restoring again
[19:02:32]
<DJ Chase (fae/faer)> (or like, how to fix it lol)
[19:03:09]
<DJ Chase (fae/faer)> oh if i manually set `$path_with_nodejs` maybe that will work
[19:03:18]
<DJ Chase (fae/faer)> presumably not
https://paste.yunohost.org/eroxetahip
[19:03:18]
<DJ Chase (fae/faer)> is it supposed to be the same path that's in PATH?
[19:03:19]
<otm33> OK, then restore with --no-remove-on-failure (it will fail) but you should be able to install the nodejs version required
[19:03:19]
<DJ Chase (fae/faer)> manually setting `$path_with_nodejs` or not?
[19:07:07]
<otm33> just like you did in your last attempt
[19:08:32]
<DJ Chase (fae/faer)> okay i have done that
https://paste.yunohost.org/oxaziherey
[19:08:45]
<DJ Chase (fae/faer)> how do i install the required nodejs version?
[19:09:54]
<otm33> `N_PREFIX=/opt/node_n/ /usr/share/yunohost/helpers.v2.1.d/vendor/n/n install 24.13.1`
[19:10:26]
<DJ Chase (fae/faer)> okay
[19:10:54]
<DJ Chase (fae/faer)> try restoring again now?
[19:15:11]
<DJ Chase (fae/faer)> or restarting the services?
[19:19:11]
<otm33> Try restarting
[19:24:03]
<DJ Chase (fae/faer)> they all `Failed at step EXEC spawning /opt/node_n/n/versions/node/24.13.1/bin/node: No such file or directory`
[19:24:43]
<DJ Chase (fae/faer)> hmm `/opt/node_n/n/versions/node/24.13.1/bin/node` definitely does exist
[19:24:44]
<DJ Chase (fae/faer)> ```
# ls -l /opt/node_n/n/versions/node/24.13.1/bin/node
-rwxr-xr-x 1 root root 122523096 Feb 9 23:59 /opt/node_n/n/versions/node/24.13.1/bin/node
```
[19:24:44]
<otm33> ???
[19:24:49]
<otm33> Try `systemctl reset-failed homarr-*`
[19:24:49]
<DJ Chase (fae/faer)> oh restart ofc sorry
[19:24:49]
<DJ Chase (fae/faer)> and then restart or send logs?
[19:24:50]
<DJ Chase (fae/faer)> failed
https://paste.yunohost.org/ucexetuhuq
[19:24:50]
<otm33> Well, try restarting first...
[19:30:25]
<DJ Chase (fae/faer)> just `homarr` is down
[19:32:08]
<DJ Chase (fae/faer)> `homarr-tasks` and `homarr-wss` are up now though
[19:32:12]
<DJ Chase (fae/faer)> `homarr-tasks` log: https://paste.yunohost.org/okihobelec
[19:32:16]
<otm33> `yunohost app shell homarr`
[19:32:16]
<otm33> Ok.
[19:32:17]
<DJ Chase (fae/faer)> ```
/usr/share/yunohost/helpers.v2.1.d/0-utils: line 459: systemctl: command not found
/usr/share/yunohost/helpers.v2.1.d/0-utils: line 459: sed: command not found
/usr/share/yunohost/helpers.v2.1.d/0-utils: line 483: systemctl: command not found
/usr/share/yunohost/helpers.v2.1.d/0-utils: line 487: su: command not found
/usr/share/yunohost/helpers.v2.1.d/0-utils: line 44: sleep: command not found
```
[19:32:19]
<otm33> `yunohost app shell homarr`
[19:32:20]
<DJ Chase (fae/faer)> that's what i did
[19:32:21]
<DJ Chase (fae/faer)> it gave me that and exited
[19:32:22]
<otm33> mkdir appdata
[19:32:22]
<otm33> cd /var/www/homarr
[19:32:25]
<DJ Chase (fae/faer)> `mkdir: cannot create directory ‘appdata’: File exists`
[19:33:20]
<otm33> chown -R homarr:homarr appdata
[19:33:22]
<DJ Chase (fae/faer)> did that, still get the errors on `app shell`
[19:36:58]
<otm33> Running out of ideas... Helping you from just my phone is not so easy...
[19:37:15]
<DJ Chase (fae/faer)> no worries
[19:37:16]
<DJ Chase (fae/faer)> thank you for your help so far
[19:54:43]
<DJ Chase (fae/faer)> so my friend says that pnpm isn't installed for 24.13.1 and wants to know what the proper way of installing it on yunohost is
[19:59:51]
<Katie (KittyKatt) [she/they]> This is the log from homarr trying to start:
```
Mar 07 14:18:56 (pnpm)[1583862]: homarr.service: Failed to locate executable /opt/node_n/n/versions/node/24.13.1/bin/pnpm: No such file or directory
```
[20:01:29]
<DJ Chase (fae/faer)> (`/opt/node_n/n/versions/node/24.13.1/bin` exists though)
[20:02:36]
<otm33> can you share the homarr service log?
[20:04:44]
<DJ Chase (fae/faer)> (it's linked somewhere in this room but idk which one it is)
[20:04:44]
<Katie (KittyKatt) [she/they]> `homarr` specifically
[20:04:44]
<DJ Chase (fae/faer)> katie which service is that log from?
[20:04:45]
<Katie (KittyKatt) [she/they]> https://paste.yunohost.org/ucexetuhuq
[20:04:45]
<Katie (KittyKatt) [she/they]> I had it open still lol
[20:04:45]
<DJ Chase (fae/faer)> beat me to it, thanks
[20:04:46]
<otm33> this one is outdated, right? it shows Mar 07 14:19:01
[20:04:46]
<DJ Chase (fae/faer)> same but i have like 12 logs open
[20:04:46]
<Katie (KittyKatt) [she/they]> Results of `find /opt/node_n/n/versions/node -name '*pnpm*'`
https://paste.yunohost.org/raw/xawoloceni
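One plausible avenue (an assumption on my part, not something confirmed in this thread for the homarr package): Node ships with corepack, which can create a `pnpm` shim next to a given node binary via its `--install-directory` flag. Shown as a dry-run sketch rather than executed:

```shell
# Hypothetical fix sketch — NOT a confirmed procedure for this package.
NODE_DIR=/opt/node_n/n/versions/node/24.13.1
cmd="$NODE_DIR/bin/corepack enable pnpm --install-directory $NODE_DIR/bin"
echo "would run (as root): $cmd"   # printed, not executed, in this sketch
```

If the packaging expects pnpm at that exact path, this would at least put a shim there; whether YunoHost's helpers manage pnpm some other way is a question for the app's maintainers.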
[20:04:47]
<DJ Chase (fae/faer)> up to date version: https://paste.yunohost.org/wuniwipeyi
[20:04:47]
<Katie (KittyKatt) [she/they]> That looks like nearly the same log lol
[20:08:37]
<DJ Chase (fae/faer)> yeah but there's a few new entries
[20:08:43]
<Katie (KittyKatt) [she/they]> Yeah, just systemd killing it fast because it's trying to restart too fast.
[20:08:45]
<DJ Chase (fae/faer)> should i try restarting again since it's been a while?
[20:08:46]
<Katie (KittyKatt) [she/they]> I'd figure out why `pnpm` doesn't exist at the path it expects it to first, personally.
[20:08:47]
<DJ Chase (fae/faer)> how would i do that?
[20:08:48]
<Katie (KittyKatt) [she/they]> That's an amazing question; maybe the devs can answer it
[20:08:49]
<Katie (KittyKatt) [she/they]> I don't want to suggest something that might mess up the way yunohost is handling those things.
[20:08:49]
<DJ Chase (fae/faer)> lol
[20:09:47]
<DJ Chase (fae/faer)> fair
[20:09:51]
<DJ Chase (fae/faer)> fortunately it's just homarr so it's okay if it has a bit of downtime
[20:10:03]
<DJ Chase (fae/faer)> wait that doesn't make sense it's the homepage lol
[20:10:26]
<DJ Chase (fae/faer)> (this is how we installed node 24.13.1 if that helps, katie)
[20:20:45]
<Katie (KittyKatt) [she/they]> Unfortunately, that doesn't help me a whole lot because I'm not privy to what that actually does.
[20:22:19]
<DJ Chase (fae/faer)> fair
[20:32:15]
<Chatpitaine Caverne> I'm searching through the ynh_homarr GitHub sources. By what magical process is the variable PATH_WITH_NODEJS in Environment="PATH=__PATH_WITH_NODEJS__" given a value (not sure of the correct English way of saying it)? I can't find anything that sets it.
Because it seems we need this filled in correctly, but we can't access the app env to do it...
[20:32:18]
<DJ Chase (fae/faer)> currently trying to install homarr as if it's a new install and then copy files from the backup
[20:32:20]
<DJ Chase (fae/faer)> it installed properly at least so that's good
[20:49:17]
<DJ Chase (fae/faer)> hmm i actually think the backup might have been borked and that's why we can't get it to work
[20:52:29]
<DJ Chase (fae/faer)> i'm not finding any of the boards/icons/etc in the backup
[20:56:32]
<Chatpitaine Caverne> Looking at that, it could be.
[20:56:34]
<DJ Chase (fae/faer)> lovely
[20:56:40]
<Chatpitaine Caverne> Do you have a recent full backup?
[20:56:41]
<DJ Chase (fae/faer)> would that be `archivist_backup`?
[20:56:45]
<DJ Chase (fae/faer)> it's from a week ago
[20:56:45]
<Chatpitaine Caverne> If you use archivist, yeah. Hopefully you have one that's not too old.
[20:56:46]
<DJ Chase (fae/faer)> i should probably back up more often than that 🙃
[20:56:48]
<Chatpitaine Caverne> I don't know homarr. Does it change a lot week to week?
Maybe you can try to keep a copy of the app's current data, if you know the folders. But if it also has a database, integrity isn't possible with the data alone...
[20:56:49]
<DJ Chase (fae/faer)> well a full backup is the system right?
[21:13:32]
<Chatpitaine Caverne> If your pre-upgrade backup isn't complete, I'm afraid so.
If the data can still be copied (folders and database), you may be able to get it back, provided any database is also preserved in your context. With the data folder plus the database, maybe you can get it all back.
[21:14:28]
<DJ Chase (fae/faer)> awesome looks like i'm remaking the homarr boards
[21:16:59]
<DJ Chase (fae/faer)> at least it's not userdata
[21:19:44]
<DJ Chase (fae/faer)> thanks for your help everybody
[21:21:17]
<DJ Chase (fae/faer)> (also i have updated my archivist backups to every three days instead of every week)
[21:21:25]
<Chatpitaine Caverne> I don't know your constraints regarding disk space and backup size. I was using a home-made backup script, a bit like archivist. Those are not incremental backups, so as sizes grow it becomes difficult to do lots of backups. I'm on a twice-a-week full backup, plus the most important apps every day.
I'm moving to Borg backup, which is an incremental, compressed and encrypted backup system. Incremental means it only backs up the difference since the last backup. With this, I'll be able to do a full backup every day. It's not an easy move (more psychologically than technically), but I have no choice, given the growing size of backups, and gigabytes are expensive.
[22:25:40]
<miro5001> Node should be provisioned by the script
[22:30:26]
<Gwên> We'll see
[22:30:26]
<Gwên> Welp, that didn't work
[22:31:08]
<Gwên> I'm raising it to 500, increasing the cache size, and restarting Conduit
[22:33:26]
<miro5001> You need to restart dnsmasq
[22:38:11]
<miro5001> Try installing homarr and replacing db.sqlite and appdata with the ones from your pre-upgrade backup, with the correct ownership
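miro5001's suggestion, sketched on scratch paths so it can be read end to end. On the real server the source would be the extracted pre-upgrade archive and the destination the homarr install directory (run as root; exact paths are assumptions):

```shell
# Scratch stand-ins for the extracted backup and the app directory
backup=./backup-demo
app=./app-demo
mkdir -p "$backup/appdata" "$app"
printf 'fake\n' > "$backup/db.sqlite"

# Copy the data back, preserving modes and timestamps
cp -a "$backup/db.sqlite" "$backup/appdata" "$app/"

# On the real server, restore ownership afterwards, e.g.:
# chown -R homarr:homarr /var/www/homarr
ls "$app"
```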