From 0198b4e385cae5b5c82c21bbe4d1005a12bdd0be Mon Sep 17 00:00:00 2001
From: Commit Report Sync Bot
Date: Sun, 2 Mar 2025 00:08:31 -0800
Subject: [PATCH] Add README and TODO note on limit input

---
 README.md | 2 +-
 TODO.txt  | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index e90cf63..44693f2 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@ This is the code for a [Gitea Action](https://docs.gitea.com/usage/actions/overv
 
 Explicitly ok - you _can_ run this on a target repo containing no commit, it's not necessary to create an awkward `README.md`-only commit to "prime" the repo[^could-create].
 
-Not a prerequisite, but a practicality note - I _think_ this will have a runtime[^no-network] which is quasi-quadratic - i.e. `O(<number of source commits> * <number of target commits>)`. Ideally, you should limit this to syncing only a small number of commits - if you trigger it on _every_ push of the source repo, it'll be kept up-to-date from there on. That does mean that you won't get any retroactive history, though - so, by all means _try_ a one-off `limit: 0` to sync "_everything from all time in the source repo_"; but if that times out, maybe use Binary Search to figure out how many commits would complete within whatever your runtime limitation is, then accept that as your available history? Or temporarily acquire a runtime executor with a longer runtime limit (that's fancy-speak for - instead of running it on your CI/CD Platform, run it on a developer's workstation and leave it on overnight :P)
+Not a prerequisite, but a practicality note - I _think_ this will have a runtime[^no-network] which is quasi-quadratic - i.e. `O(<number of source commits> * <number of target commits>)`. Ideally, you should limit this to syncing only a small number of commits - if you trigger it on _every_ push of the source repo, it'll be kept up-to-date from there on. That does mean that you won't get any retroactive history, though - so, by all means _try_ a one-off `limit: 0` to sync "_everything from all time in the source repo_" (TODO - haven't actually implemented that behaviour yet!); but if that times out, maybe use Binary Search to figure out how many commits would complete within whatever your runtime limitation is, then accept that as your available history? Or temporarily acquire a runtime executor with a longer runtime limit (that's fancy-speak for - instead of running it on your CI/CD Platform, run it on a developer's workstation and leave it on overnight :P)
 
 If a target-repo gets so unreasonably large that even the runtime of syncing it from a single commit is too high, then I guess you could trigger writing to different target repos from each source repo in some hash-identified way - but, as I've said many times already in this project, YAGNI :P
 
diff --git a/TODO.txt b/TODO.txt
index 26ddd41..9245230 100644
--- a/TODO.txt
+++ b/TODO.txt
@@ -3,6 +3,8 @@
 - [ ] Remove `parentHashes`, never ended up being needed
 - [ ] Add a link to the original commit from the body of the file that's created in the target repo, and/or in the commit body.
+- [ ] Allow passing a `limit` variable to control how many source commits to read
+  - [ ] Allow passing 0 to indicate all commits
 - [ ] Flesh out the `README.md` just before pushing with a description of what this tool is
   - Doing it just before push, rather than on first creation, so that it will be added even to target repositories that have already been initialized with this tool
 
 
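As a rough illustration of the "Binary Search" suggestion in the README paragraph above, here is a minimal sketch (not part of the patch) of bisecting for the largest `limit` that finishes within a runtime budget. The `./run-sync.sh --limit N` entrypoint, the time budget, and the upper bound are all hypothetical - the action's real invocation isn't shown in this patch - and each trial assumes you reset the target repo between runs so earlier syncs don't skew later timings.

import subprocess
import time

TIME_BUDGET_SECONDS = 30 * 60  # assumed CI runtime cap - adjust to your platform


def sync_completes_within_budget(limit: int) -> bool:
    """Run one sync with the given commit limit; True if it finished in time.

    The command below is hypothetical - substitute however you actually run
    the action outside CI. A failed run counts as "not workable" here.
    """
    try:
        subprocess.run(
            ["./run-sync.sh", "--limit", str(limit)],  # hypothetical entrypoint/flag
            timeout=TIME_BUDGET_SECONDS,
            check=True,
        )
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return False
    return True


def largest_workable_limit(upper_bound: int) -> int:
    """Bisect for the largest limit (>= 1) whose sync finishes within the budget.

    Returns 0 if even a single-commit sync is too slow; don't pass that 0 back
    to the action, since the README/TODO reserve 0 to mean "sync everything".
    """
    lo, hi = 1, upper_bound
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if sync_completes_within_budget(mid):
            best = mid      # mid commits fit in the budget - try syncing more
            lo = mid + 1
        else:
            hi = mid - 1    # too slow - try fewer commits
    return best


if __name__ == "__main__":
    print(largest_workable_limit(upper_bound=10_000))

Bisection is a reasonable fit here only under the assumption that runtime grows monotonically with the number of commits synced, which the quasi-quadratic estimate in the README suggests.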