[12:59:37]
<Yunohost Git/Infra notifications> [yunohost] Salamandar pushed to fpu-cron_to_timers: utils: jinja_filters: Add stable_shuffle jinja filter for monthly-stable shuffling This allows e.g resolver list to be ... ([f389b8d5](https://github.com/YunoHost/yunohost/commit/f389b8d502e4410fc21b5f2b38f22ab803242f3e))
[12:59:37]
<Yunohost Git/Infra notifications> [yunohost] Salamandar pushed to fpu-cron_to_timers: dnsmasq: generate resolv.dnsmasq.conf via a jinja file ([e071db3a](https://github.com/YunoHost/yunohost/commit/e071db3aad8f3d9302cf73c9559c6ab117af4bc0))
[13:26:39]
<Yunohost Git/Infra notifications> [yunohost] Salamandar pushed to trixie: system: ignore mypy issue caused by Literal int return value ([550c32e5](https://github.com/YunoHost/yunohost/commit/550c32e577cd0be11e5eda7655650aa09872524a))
[13:30:07]
<Yunohost Git/Infra notifications> 🏗️ Starting build for yunohost/13.0.3+202512281430 for trixie/unstable/all...
[13:31:31]
<Yunohost Git/Infra notifications> ✔️ Completed build for yunohost/13.0.3+202512281430 for trixie/unstable/all.
[13:31:34]
<Yunohost Git/Infra notifications> ✔️ Completed distribution for yunohost/13.0.3+202512281430 for trixie/unstable.
[13:43:53]
<Yunohost Git/Infra notifications> [yunohost] Salamandar pushed to trixie: app_catalog: ignore crappy mypy error ([078e7474](https://github.com/YunoHost/yunohost/commit/078e74741846ffd35c5d8551f396e04eb2e5834c))
[13:45:04]
<Yunohost Git/Infra notifications> 🏗️ Starting build for yunohost/13.0.3+202512281445 for trixie/unstable/all...
[13:46:25]
<Yunohost Git/Infra notifications> ✔️ Completed distribution for yunohost/13.0.3+202512281445 for trixie/unstable.
[13:46:25]
<Yunohost Git/Infra notifications> ✔️ Completed build for yunohost/13.0.3+202512281445 for trixie/unstable/all.
[13:52:22]
<Yunohost Git/Infra notifications> [cli] Salamandar pushed to main: cli: rename cli list as cli list-servers ([de183e31](https://github.com/YunoHost/cli/commit/de183e31c6bff28509349463225a403320ca6113))
[13:52:23]
<Yunohost Git/Infra notifications> [cli] Salamandar pushed to main: Remplace server default with localhost ([b852cf43](https://github.com/YunoHost/cli/commit/b852cf439db44866bec10f0baa5cd3429fc4f521))
[13:52:54]
<Yunohost Git/Infra notifications> [cli] Salamandar pushed to main: Remplace server default with localhost ([c5316fa9](https://github.com/YunoHost/cli/commit/c5316fa9306ec46887f895809ed613790594ef7e))
[13:56:00]
<SveDec> Hi, thanks for the feedback; happy to discuss this point and iterate on the PR, maybe pairing on it?
[13:56:41]
<SveDec> I saw that CI ran on the PR and that the Python lint complains about a few things, most of which don't ring a bell to me, except the `global_settings_setting_dns_custom_resolvers_enabled_help` entry which is missing (I hadn't understood it was mandatory; I've made a note to add it to the PR in the coming days). For the other points, I see on another PR that the lint didn't pass either, so I gather it's not a hard requirement for merging.
[13:59:56]
<Yunohost Git/Infra notifications> [cli] Salamandar pushed to main: Print the error message properly when received fom the server ([d3ad0fe2](https://github.com/YunoHost/cli/commit/d3ad0fe2ca29bd2908066ae996138c44e07c8161))
[14:09:48]
<Salamandar> rebase your branch, I just fixed a few things
[14:09:56]
<Salamandar> oh, and if you can, target the trixie branch instead of dev :D
[14:13:23]
<Yunohost Git/Infra notifications> [yunohost] SveDec edited [pull request #2246](https://github.com/YunoHost/yunohost/pull/2246): [enh] Add a user defined DNS resolvers setting
[14:25:21]
<SveDec> I've changed the PR's target, but as for the *rebase* I don't see how to do it directly on GitHub (I'm not -yet- a GitHub pro, nor a `git` one for that matter 😅), and I won't have my dev machine at hand before Tuesday (unless I find another machine somewhere to pull from my repo before then) :/
[14:27:32]
<SveDec> So for *new feature* PRs, we should prefer the `trixie` branch until the release?
[14:29:23]
<Aleks (he/him/il/lui)> >So for new feature PRs, we should prefer the trixie branch until the release?
meh, I don't know, it's debatable; personally I'd say it depends on how quickly the feature can be integrated
[17:03:01]
<Yunohost Git/Infra notifications> [Apps tools error] [List builder] Error while updating atuin: No cache yet for atuin
[17:25:04]
<Salamandar> `git checkout mabranche ; git rebase trixie` :)
[17:25:06]
<Salamandar> nothing urgent
[17:42:54]
<m606> Hello, while working on the script that will auto-update security.toml, I'm wondering why different "places" are described there: https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/autoupdate_app_sources/autoupdate_app_sources.py#L202
Local = the YNH infra machine running the script?
Remote = only for dev purposes?
[17:48:36]
<Aleks (he/him/il/lui)> i'm thinking "local" means it acts on a local clone of the repo, and "remote" means the actual remote git on github
[17:50:07]
<Aleks (he/him/il/lui)> cf https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/autoupdate_app_sources/autoupdate_app_sources.py#L114-L141
in the "local" case, it reads the manifest using `open("manifest.toml").read()` basically, and in the "remote" case it uses the pygithub(?) lib to interact with the github repo, for example `self.repo.get_contents("manifest.toml", ref=self.base_branch)` to get the manifest
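The local/remote split described above can be sketched with the stdlib only (the real script uses the pygithub lib for the remote case; `read_manifest` and its arguments here are hypothetical names for illustration):

```python
import tempfile
import urllib.request
from pathlib import Path

def read_manifest(app_dir=None, raw_url=None):
    """Read an app's manifest.toml, "local" or "remote" style.

    app_dir: path to a local clone of the app repo (the "local" mode);
    raw_url: an HTTP URL to the raw file (the "remote" mode; the real
             script uses pygithub rather than a plain GET).
    """
    if app_dir is not None:
        return (Path(app_dir) / "manifest.toml").read_text()
    with urllib.request.urlopen(raw_url) as resp:
        return resp.read().decode()

# Demonstrate the "local" mode with a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "manifest.toml").write_text('id = "demo"\n')
    print(read_manifest(app_dir=d), end="")  # id = "demo"
```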
[17:50:47]
<Aleks (he/him/il/lui)> cf examples for Pygithub here : https://pygithub.readthedocs.io/en/stable/examples/Repository.html#get-a-specific-content-file
[17:51:30]
<Aleks (he/him/il/lui)> ah or you mean in which case are we using which ?
[17:51:48]
<m606> yes
[17:52:32]
<m606> basically the security.toml-related script just downloads catalog.toml, security.toml and the apps' manifest.toml. I was wondering whether it should support both the local & remote scenarios as well?
[17:54:05]
<Aleks (he/him/il/lui)> apparently it boils down to here, https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/autoupdate_app_sources/autoupdate_app_sources.py#L779 , whether "apps" arguments are provided when calling the script ... I think originally I had this mechanism to be able to run on specific app(s) to test whether the autoupdater parameters are ok for that specific app, hoping packagers would run it on their side when plugging in parameters to validate that they are okay, but in practice ... i don't think it's actually used ? it's useful for debugging tho ...
[17:55:10]
<Aleks (he/him/il/lui)> but that's also because the autoupdater thingy has many parameters and needs to run a lot of queries, maybe it's less the case for apps since they only need to specify a CPE ?
[17:56:58]
<Aleks (he/him/il/lui)> it can still be iterated upon later, idk; to me, I would focus on making a PR with what you have so far, like what it produces if you run it on all the apps that provide a CPE
[18:00:36]
<m606> ok, I was thinking that maybe on the YNH infra there was a local mirror of the GH repos, and that a script running there would just need to hop from dir to dir instead of using the GH API, but given what you say I believe it doesn't matter for me here.
[18:01:06]
<Yunohost Git/Infra notifications> [Apps tools error] [List builder] Error while updating atuin: No cache yet for atuin
[18:01:38]
<m606> the script does provide an option to run it only on a selected set of apps, for debugging of course
[18:03:01]
<Aleks (he/him/il/lui)> yes, there is a local app cache (the script handling the app cache maintenance is https://github.com/YunoHost/apps_tools/blob/main/app_caches.py and can be used locally too), but the point of the autoupdater is to create a PR on each repo, so it's easier to just "do the manifest.toml edits directly on the github repo + create a PR from it" rather than "create a new branch locally, do the edit locally on manifest.toml, commit/push, make the PR, clean up the local cache of the new branch we created"
[18:03:02]
<m606> it's coming soon - it was kind of ready until I noticed raw.githubusercontent.com was applying rate limiting... so I just need to change the way I download the toml files, using the GitHub API
[18:05:20]
<Aleks (he/him/il/lui)> for the security.toml thingy we "only" want to read the CPE info from all manifests and update the security.toml, so there's less repo juggling, and whether we want to read "local manifests" or "remote manifests" is mainly about speed vs relying on a local cache (e.g. if we run on a local cache, we have to have the local app cache set up, but then reading the manifest.toml is basically immediate - whereas if we read the manifest.toml remotely, we have no cache maintenance, but we need API keys and it makes many queries under the hood)
[18:07:00]
<m606> ```
INFO:root:Starting to check for new vulnerabilities for 639 apps.
INFO:root:Estimated time of execution for this script is ~117 minutes.
```
[18:07:19]
<m606> maybe it can be optimized later idk
[18:08:02]
<Aleks (he/him/il/lui)> 🙀
[18:08:55]
<Aleks (he/him/il/lui)> urrrgh yeah, if you're just fetching manifest.toml with a basic `requests.get` on `raw.githubusercontent.com`, i really think you want to look into using the local cache or the pygithub lib (with a token) ...
[18:09:31]
<Aleks (he/him/il/lui)> didn't think requests on `raw.githubusercontent.com` would pile up to 117 minutes, I'd have expected more like 5-ish minutes...
[18:10:10]
<Aleks (he/him/il/lui)> or maybe it's poking github AND poking the NIST API ?
[18:10:22]
<m606> well, it's not raw.githubusercontent.com that takes the most time
[18:13:00]
<m606> `total_time_sec = (6 + 1 + 4) * apps_number`
where:
6 = NIST instructions without API key (with API key it can be reduced down to 1)
1 = EUVD (actually they don't limit anything so far, but 1s sleep just to avoid potential errors)
4 = is a margin because when doing tests on a few apps (with raw.github...) it took approx this time more than API-related sleep time
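The estimate is simple per-app arithmetic; a quick check of the numbers quoted above:

```python
# Per-app delay budget from the formula above, in seconds:
nist_sleep = 6   # NIST rate-limit sleep without an API key (1 with a key)
euvd_sleep = 1   # courtesy sleep for EUVD
margin = 4       # observed per-app overhead during testing

apps_number = 639
total_min = (nist_sleep + euvd_sleep + margin) * apps_number / 60
print(round(total_min))  # 117

# With a NIST API key the first term drops to 1 second:
print(round((1 + euvd_sleep + margin) * apps_number / 60))  # 64
```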
[18:13:54]
<Aleks (he/him/il/lui)> maybe you want to parallelize those then but i don't know if NIST / EUVD have rate limits too
[18:14:18]
<m606> so it could already be ~65 min in total with a NIST API key
[18:16:03]
<m606> several requests from the same IP? For NIST without an API key it would fail; for the other cases, idk
[18:16:34]
<Aleks (he/him/il/lui)> eg this is how the app catalog logo fetching is parallelized: https://github.com/YunoHost/yunohost/blob/dev/src/app_catalog.py#L253-L272
in particular, just calling `ThreadPool(8).imap_unordered(function_to_apply, list_to_iterate_upon)` runs 8 threads in parallel, each one applying the function to one item of the list at a time
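The pattern is easy to try in isolation; a minimal sketch where `fetch` stands in for the real I/O-bound work:

```python
from multiprocessing.pool import ThreadPool

def fetch(app):
    # Stand-in for an I/O-bound call (an HTTP request in the real script)
    return app.upper()

apps = ["gogs", "atuin", "gotosocial"]
with ThreadPool(8) as pool:
    # 8 worker threads; each grabs the next item as it finishes, and
    # results come back in completion order, hence "unordered"
    results = list(pool.imap_unordered(fetch, apps))
print(sorted(results))  # ['ATUIN', 'GOGS', 'GOTOSOCIAL']
```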
[18:17:45]
<Aleks (he/him/il/lui)> but yeah i suppose we'll want to use API keys for NIST and EUVD if that helps speed things up because 120 minutes is super long @_@
[18:18:15]
<Aleks (he/him/il/lui)> even the app source autoupdate doesn't take that long 😬
[18:19:20]
<m606> yes it's free, just need maybe an official YNH contact email address - https://nvd.nist.gov/developers/request-an-api-key
[18:19:34]
<m606> the bottleneck is only NIST, EUVD does not have API key system
[18:22:45]
<m606> so multiprocessing saves time on the request initializations, right? Or is it a bandwidth-limiting issue?
[18:24:13]
<Aleks (he/him/il/lui)> mmm, basically one of the use cases of multithreading (especially in Python, where the GIL story prevents actual CPU-parallel multithreading?) is when your process is going to spend a lot of time waiting on I/O, in this case waiting for the responses from the servers
[18:25:19]
<Aleks (he/him/il/lui)> of course that assumes the rate limit on the distant server isn't too harsh, otherwise it's just gonna stop answering requests at some point
[18:28:35]
<m606> ok thanks i'll try
[18:32:13]
<m606> just wondering as well what makes you put
```
from multiprocessing.pool import ThreadPool
import requests
```
inside the function rather than at the top of the file? In case the file is imported to use other functions but not this one?
[18:35:39]
<m606> and should I rather use pygithub or a simple API implementation such as https://stackoverflow.com/questions/9272535/how-to-get-a-file-via-github-apis#answer-70136393 ?
[18:36:34]
<Aleks (he/him/il/lui)> ah yeah, we're lazy-loading the `requests` library at several places in the code because ... for some reason it takes several seconds, i think, to run just `import requests` on low-end hardware like an RPi 2 or so, so you would "lose" that time every time you run any `yunohost` command even if requests is not actually used ...
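The lazy-loading trick looks like this in isolation (a sketch, with stdlib `smtplib` standing in for a slow-to-import third-party lib, and `send_report` a hypothetical name):

```python
import sys

def send_report(body):
    # Lazy import: the module is only loaded the first time this function
    # runs, so CLI invocations that never reach this code path skip the
    # import cost entirely (smtplib stands in for a slow import like requests)
    import smtplib
    return smtplib.__name__

print("smtplib" in sys.modules)  # False: not imported at startup
send_report("hello")
print("smtplib" in sys.modules)  # True: imported on first use
```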
[18:36:51]
<Aleks (he/him/il/lui)> and for multiprocessing, idk, maybe i just got lazy when writing this
[18:37:37]
<Aleks (he/him/il/lui)> but nowadays i'm trying to be careful with global imports in the context of YunoHost for that reason even though ofc in other regular context you'd want the imports to be global
[18:41:49]
<m606> any call on the pygithub vs. simple python implementation?
[18:42:33]
<m606> not sure it matters a lot here, but i tend to avoid deps for small things
[18:44:54]
<Aleks (he/him/il/lui)> hmmmm idk, hitting `raw.githubusercontent.com` feels weird if what you want is to read the manifest of every app ... i mean, on the actual infrastructure we have both a) a local cache of every app in the catalog and b) we already use pygithub elsewhere, with tokens etc, for that kind of thing
[18:45:52]
<m606> no this hits api.github.com https://stackoverflow.com/questions/9272535/how-to-get-a-file-via-github-apis#answer-70136393
[18:46:17]
<m606> ah ok
[18:46:20]
<Aleks (he/him/il/lui)> hmokay
[18:46:22]
<m606> so i'll use pygithub
[18:46:25]
<m606> if already used...
[18:48:19]
<Aleks (he/him/il/lui)> querying `api.github.com` is fine too assuming you're using a token to not hit the rate limit
[18:48:57]
<Aleks (he/him/il/lui)> i mean, urrrgh, idk, i haven't thought deeply about this; i think the fastest thing would be to just read the manifests from the local cache
[18:49:05]
<Aleks (he/him/il/lui)> honestly try setting up the local cache on your machine it's not that long i think
[18:49:16]
<Aleks (he/him/il/lui)> lemme check the right command
[18:51:28]
<Aleks (he/him/il/lui)> should be something like `python3 app_caches.py -j8 -l path/to/apps/repo -c apps_cache/`, with `apps_cache` being the folder where every app repo will be cloned/updated (i suppose it'll be created if it doesn't exist), and `path/to/apps/repo` being where you have a clone of https://github.com/YunoHost/apps/
[18:51:54]
<Aleks (he/him/il/lui)> and `-j8` is to parallelize the whole thing on 8 workers/process
[18:52:15]
<Aleks (he/him/il/lui)> should take something like, idk, 15-30 minutes ?
[18:52:42]
<Aleks (he/him/il/lui)> the progress bar will indicate the remaining time
[18:55:05]
<m606> so the general idea is to install the cache and then look for content in there. And every time the script is run, [update](https://github.com/YunoHost/apps_tools/blob/083361f4fd13b1faf36fcbc2ebd55db64562fc1e/app_caches.py#L86) the cache first and then run the script?
[18:55:09]
<Aleks (he/him/il/lui)> yeah, on the infrastructure the cache is updated regularly
[18:55:25]
<Aleks (he/him/il/lui)> and when testing/debugging you don't necessarily care about having a super-up-to-date cache
[18:55:42]
<m606> oh yes, i'm more thinking for production
[18:55:55]
<Aleks (he/him/il/lui)> i mean it's not the "security.toml update" script's job to update the cache
[18:56:23]
<m606> for testing I don't mind if the scripts take 2h to run (actually i run it on a small batch only)
[19:00:00]
<m606> ok, i mean, if the script is run every X days on the infra in prod, at least security.toml (ideally also catalog.toml and manifest.toml) should be updated beforehand to have a clean process.
So should I use raw.githubusercontent just for it, and the cache for the others?
[19:01:37]
<m606> or will the infra cache policy somehow be aligned with the script's periodicity?
[19:01:39]
<Aleks (he/him/il/lui)> ah, you mean "how to obtain the catalog / apps.toml"? yeah, on the infrastructure there is also a clone of the whole apps repo, with the catalog etc
[19:02:02]
<Aleks (he/him/il/lui)> the cache on the infra is used for many things, it's updated like every 2 hours, something along those lines
[19:02:17]
<m606> ah ok!
[19:03:11]
<Aleks (he/him/il/lui)> it's probably a good idea to use the same argparse structure as the other scripts such as autoupdate_app_sources, cf this line https://github.com/YunoHost/apps_tools/blob/main/autoupdate_app_sources/autoupdate_app_sources.py#L791 which in fact adds "standard" arguments to specify where the app cache lives and where the "app catalog" lives
[19:03:35]
<Aleks (he/him/il/lui)> https://github.com/YunoHost/apps_tools/blob/main/appslib/get_apps_repo.py#L24
[19:04:06]
<m606> well, i did that for another reason, but i didn't have a clear view of what that APP_CACHE folder was
[19:04:27]
<m606> ok thanks
[19:05:57]
<Yunohost Git/Infra notifications> [Apps tools error] [List builder] Error while updating atuin: No cache yet for atuin
[19:06:27]
<Aleks (he/him/il/lui)> ^ that's the cache updating 😬
[19:07:18]
<Aleks (he/him/il/lui)> or rather the json catalog at https://app.yunohost.org/default/v3/apps.json (fetched by the yunohost servers) being rebuilt but missing the cache for atuin
[19:17:29]
<Thomas> > <@Alekswag:matrix.org> ^ that's the cache updating 😬
Usually it's when branch = main is not set in apps.toml
[19:18:24]
<m606> https://paste.yunohost.org/iqimemibig.text
To update I just rerun the same cmd ?
[19:20:20]
<Aleks (he/him/il/lui)> urrrrghuuu
[19:20:35]
<Aleks (he/him/il/lui)> are we missing branch = main for 4 other apps and it went unnoticed or something
[19:24:30]
<m606> i can send a PR if you want
[19:24:42]
<m606> just patched it locally to update the cache successfully
[19:35:20]
<Aleks (he/him/il/lui)> hmmmmm gotosocial in fact does have "branch = main"
[19:35:54]
<Aleks (he/him/il/lui)> 🤔
[19:38:54]
<m606> I had not updated my fork 🫢
[19:39:13]
<m606> so the git pull was not at the actual latest point
[20:45:01]
<m606> Now I'm trying to use `get_catalog()` but [this line](https://github.com/YunoHost/apps_tools/blob/083361f4fd13b1faf36fcbc2ebd55db64562fc1e/appslib/utils.py#L11) fails because it wants to find `apps.toml` in `basepath/apps_tools` instead of `basepath/apps` when the script is run from, say, `basepath/apps_tools/vuln/vuln.py`. Actually I even wonder how it can work for `autoupdate_app_sources`, which lives in `basepath/apps_tools/autoupdate_app_sources/autoupdate_app_sources.py`?
[20:46:06]
<Aleks (he/him/il/lui)> hmm
[20:46:22]
<Aleks (he/him/il/lui)> supposedly that var is changed via `set_apps_path()` at some point
[20:46:52]
<m606> but it doesn't work. contrary to this one which works fine: https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/appslib/get_apps_repo.py#L45
[20:49:10]
<m606> as i import `get_apps_repos.py` (and do `get_apps_repo.add_args(parser)`) I have this arg available : https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/appslib/get_apps_repo.py#L30
[20:53:03]
<m606> so I should import `set_app_path` and run it? or should we rather add something like this https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/appslib/get_apps_repo.py#L82-L84 so that the `-l` arg works as I would expect it does ? https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/appslib/get_apps_repo.py#L30
[20:54:20]
<Aleks (he/him/il/lui)> eeeh zblerg i don't know exactly, didn't write all this thing with the arg parsing and cache mechanism and all that, could be a bug idk 😵💫
[20:56:43]
<m606> ok
[21:12:49]
<m606> @Salamandar:matrix.org would you by chance remember how this argument is meant to be used ? https://github.com/YunoHost/apps_tools/blob/f3e89ec927acb907d3b891d725f910e037120887/appslib/get_apps_repo.py#L30
[21:19:45]
<Salamandar> hmmmm lemme check
[21:20:24]
<Salamandar> yes so it's a bit of a clusterfuck and it's kinda my fault here
[21:21:01]
<Salamandar> you either have a local copy of the `https://github.com/yunohost/apps` repository on your computer or you don't
[21:21:55]
<Salamandar> if you have a local copy, pass `--apps-dir ../apps`
if you don't, pass `--apps-repo https://github.com/yunohost/apps` and the tool will clone the repo for you
[21:22:19]
<Salamandar> the idea was to mimic the behaviour of our old tools in the yunohost infra
[21:22:27]
<Salamandar> but maybe now we want to always have a local copy, idk
[21:25:44]
<Salamandar> Actually i can't find any use of --apps-repo by grepping my local copy of ynh-apps or ynh repos… so maybe it's not used anymore?
[21:30:14]
<Salamandar> uuuuh
[21:30:19]
<m606> i would use it if I could 😁
[21:30:31]
<m606> so yeah i am trying to use a local copy which is at `basepath/apps` when the script is run at `basepath/apps_tools/vuln/vuln.py`. So I run: `python update_vulnerabilities_database.py -a gogs -c ../../apps_cache -l ../../apps -w` (i have imported the files in that script and ran `get_apps_repo.add_args(parser)` for these args to be taken into account). But the script fails at finding the file at `basepath/app_tools/apps.toml`. It looks at the wrong place, due I think to `REPO_APPS_ROOT` (defined in `utils.py`)
[21:30:58]
<Salamandar> let me try
[21:31:05]
<m606> note that the app_cache arg works fine, on the other hand
[21:31:39]
<Salamandar> do you have a branch name ?
[21:33:02]
<m606> not yet, let me send you a small demo script
[21:33:08]
<m606> 1mn
[21:33:52]
<Salamandar> you need to call another function after parsing args
[21:34:08]
<Salamandar> `apps_dir = get_apps_repo.from_args(args)`
[21:34:12]
<Salamandar> this is the function that actually does the clone if required
[21:36:34]
<Salamandar> the reason for this "clone" behaviour is that some tools actually edit / commit the contents of the apps repository, and you might not want a local copy on your computer to be edited like that
[21:40:56]
<m606> ok thanks. Well i did call it through `get_apps_repo.cache_path(args)` which calls in turn from_args() but it didn't work
[21:41:13]
<m606> but while preparing this PoC for you, I see it does work
[21:41:29]
<m606> https://aria.im/_bifrost/v1/media/download/AeB9UC6Ao4Rw8vkcLAqXj3QVs5fgBpBt_ODxdBW9HyrTE2DkLKtkKHqFMxRLB28yiGAG5Xwoh2MeVB-23XtaDNpCebbA8iSAAG1hdHJpeC5vcmcvbVd6cm5wYmh3aG5FWHVDWUJ3Zkhlc1p6
[21:41:39]
<m606> so my issue is elsewhere, sorry
[21:41:52]
<Salamandar> there is an issue with your upload
[21:41:58]
<Salamandar> `{"errcode":"M_NOT_FOUND","error":"Not found '/_matrix/client/v1/media/download/matrix.org/mWzrnpbhwhnEXuCYBwfHesZz'"}`
[21:43:41]
<m606> https://aria.im/_bifrost/v1/media/download/AcmMTexPoq4mbG0XykgykXYiVk1oLf7b6fYjmoO6aG5KGERbQ8hfk_AFXIb0qvNiDGNFEEskGcofRZSNdHQLmUVCebbBEjtAAG1hdHJpeC5vcmcvTGRHTFppUXBWdUdpcWdHcGVEU3BpUGhn
[21:44:56]
<m606> you should ideally run it from apps_tools/whatever/test.py -c path/to/apps_cache -l path/to/apps
[21:45:03]
<Salamandar> well that works for me
[21:45:05]
<Salamandar> https://aria.im/_bifrost/v1/media/download/AXl5vrMaNZBzmkgvNRwFLowVZAZiy6_FVvmFSbwF1NbrEdCcN7ULy1Vh5jdFXS-6V4Yn3DNcFJtYxmYq0vQ3B1lCebbBJt9QAG1hdHJpeC5vcmcvaGZXSkZOUUtyR2ZQcElUcFl3VWlLRXZS
[21:45:20]
<Salamandar> toto™
[21:47:44]
<m606> yes that's what i was telling you just above 😁 but it does not work in my large script for some reason.
```py
cache_path = get_apps_repo.cache_path(args)
print(cache_path)
print(APPS_REPO_PATH)
```
gives
```
File "/basepath/apps_tools/update_vulnerabilities_database/update_vulnerabilities_database.py", line 795, in main
print(APPS_REPO_PATH)
^^^^^^^^^^^^^^
NameError: name 'APPS_REPO_PATH' is not defined
```
[21:48:07]
<m606> but that's probably another story then, i'll check it out. thanks
[21:49:08]
<Salamandar> You gotta import it from a lib i guess
[21:51:40]
<Salamandar> ah but
[21:51:49]
<Salamandar> `APPS_REPO_PATH` i think is an internal stuff
[21:51:56]
<Salamandar> for `get_apps_repo.py`
[22:00:51]
<m606> hmm and when I import it I can't inherit its globals?
[22:01:53]
<m606> but this makes it work `get_apps_repo.set_apps_path(args.apps_dir)`
[22:04:06]
<m606> although it should rather be `utils.set_apps_path(args.apps_dir)` 😁
[22:04:24]
<Salamandar> so you should not access this variable
[22:04:33]
<Salamandar> you can but it doesn't make sense for you to use this variable
[22:05:44]
<m606> indeed, i am happy enough with this!
[22:05:59]
<Salamandar> ah ah
[22:06:05]
<Salamandar> TBH i should destroy this variable
[22:06:11]
<Salamandar> do something with @cache maybe
[22:06:30]
<Salamandar> or a callable class idk
[22:07:16]
<Salamandar> but this global variable is only here as a cache
[22:08:09]
<Salamandar> and FYI, `global` is only a keyword to inform python that, when assigning, the variable is not local to the function scope but lives in the global scope
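That behaviour of the `global` keyword can be seen in a minimal sketch:

```python
counter = 0

def bump():
    # Without the "global" declaration, the assignment below would create
    # a new local name and "counter += 1" would raise UnboundLocalError
    global counter
    counter += 1

bump()
bump()
print(counter)  # 2
```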
[22:11:05]
<m606> so when you import script B from script A, script A cannot see the globals defined in script B ?
[22:11:19]
<Salamandar> well
[22:12:01]
<m606> that was my bet, although experimental anyway
[22:12:12]
<Salamandar> you either do
```
from something import THE_GLOBAL_VAR
# or from something import *
print(THE_GLOBAL_VAR)
```
or
```
import something
print(something.THE_GLOBAL_VAR)
```
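The distinction between the two styles matters once the module rebinds the variable after import (as `set_apps_path()` does above): a `from`-import snapshots the binding, while attribute access follows rebinds. A self-contained sketch, using a synthetic module in place of a real `something.py`:

```python
import sys
import types

# Build a throwaway module so the example is self-contained; it stands
# in for a real "something.py" file on disk.
_mod = types.ModuleType("something")
_mod.THE_GLOBAL_VAR = 1
sys.modules["something"] = _mod

from something import THE_GLOBAL_VAR  # snapshots the binding at import time
import something

something.THE_GLOBAL_VAR = 2          # the module rebinds its global

print(THE_GLOBAL_VAR)            # 1: the from-import name doesn't follow rebinds
print(something.THE_GLOBAL_VAR)  # 2: attribute access sees the live value
```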
[22:12:15]
<Salamandar> standard python stuff i guess :)
[22:12:26]
<Salamandar> the global keyword doesn't change the behaviour of this
[22:15:26]
<m606> yes ok, I had forgotten the `something.` in `print(something.GLOBAL_VAR)`
[22:18:31]
<Salamandar> :)
[22:18:54]
<Salamandar> if you want to refactor the apps_tools repository, re-think how our infra scripts work, feel free
[22:19:29]
<orhtej2> my `+1337 -666` checkout disapproves
[22:19:57]
<Salamandar> for now we have multiple clones of the apps directories (not the `apps` repo i mean), one per tool, and that's a bit crappy. The only tool that really requires a separate clone should be the autopatch tool
[22:20:03]
<orhtej2> (with a bunch of `print('here')` for debugging wtf is the autoupdate complaining about)
[22:21:00]
<Salamandar> https://aria.im/_bifrost/v1/media/download/AUiGX5D6Q-pTWSAAXtdVTtspDz8tO8K5FLUsAzw3EpG3q8AdhNw4pZjqJDuLqovdut6yhpYSPKUcDw_GNwlf2oRCebbDNOEQAG1hdHJpeC5vcmcvb3ZCU1hmSWhTcmNyamRiV3VaWmZORmxi
[22:21:08]
<Salamandar> too bad it wasn't -1312
[22:21:45]
<orhtej2> sadly, it was too leet
[22:27:31]
<m606> right now i will just be adding a script :)