Add README and TODO note on limit input

Commit Report Sync Bot 2025-03-02 00:08:31 -08:00
parent 5a705e93c6
commit 0198b4e385
2 changed files with 3 additions and 1 deletion


@@ -18,7 +18,7 @@ This is the code for a [Gitea Action](https://docs.gitea.com/usage/actions/overview)
Explicitly ok - you _can_ run this on a target repo containing no commits; it's not necessary to create an awkward `README.md`-only commit to "prime" the repo[^could-create].
-Not a prerequisite, but a practicality note - I _think_ this will have a runtime[^no-network] which is quasi-quadratic - i.e. `O(<number_of_commits_requested_to_sync_from_source_repo> * <number_of_commits_in_target_repo_after_that>)`. Ideally, you should only sync a limited number of commits - if you trigger it on _every_ push of the source repo, it'll be kept up-to-date from there on. That does mean that you won't get any retroactive history, though - so, by all means _try_ a one-off `limit: 0` to sync "_everything from all time in the source repo_"; but if that times out, maybe use binary search to figure out how many commits would complete within whatever your runtime limitation is, then accept that as your available history? Or temporarily acquire a runtime executor with more runtime (that's fancy-speak for - instead of running it on your CI/CD Platform, run it on a developer's workstation and leave it on overnight :P)
+Not a prerequisite, but a practicality note - I _think_ this will have a runtime[^no-network] which is quasi-quadratic - i.e. `O(<number_of_commits_requested_to_sync_from_source_repo> * <number_of_commits_in_target_repo_after_that>)`. Ideally, you should only sync a limited number of commits - if you trigger it on _every_ push of the source repo, it'll be kept up-to-date from there on. That does mean that you won't get any retroactive history, though - so, by all means _try_ a one-off `limit: 0` to sync "_everything from all time in the source repo_" (TODO - haven't actually implemented that behaviour yet!); but if that times out, maybe use binary search to figure out how many commits would complete within whatever your runtime limitation is, then accept that as your available history? Or temporarily acquire a runtime executor with more runtime (that's fancy-speak for - instead of running it on your CI/CD Platform, run it on a developer's workstation and leave it on overnight :P)
If a target-repo gets so unreasonably large that even the runtime of syncing it from a single commit is too high, then I guess you could trigger writing to different target repos from each source repo in some hash-identified way - but, as I've said many times already in this project, YAGNI :P
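The binary-search suggestion in the README text above could be sketched like this. This is a hypothetical illustration, not code from this repo: `syncWithLimit` stands in for "run the action with this `limit` and report whether it finished within the time budget".

```typescript
// Hypothetical sketch: binary-search the largest `limit` whose sync completes
// within the runner's time budget. `syncWithLimit` is an illustrative stand-in,
// not an identifier from this repo.
function largestFeasibleLimit(
  syncWithLimit: (limit: number) => boolean,
  maxLimit: number,
): number {
  let lo = 0;            // highest limit known to complete in time
  let hi = maxLimit + 1; // lowest limit known (or assumed) to time out
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    if (syncWithLimit(mid)) {
      lo = mid; // completed in time - try syncing more commits
    } else {
      hi = mid; // timed out - try fewer
    }
  }
  return lo;
}
```

This assumes feasibility is monotone - if syncing `n` commits completes in time, syncing fewer does too - which holds given the quasi-quadratic cost described above.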


@@ -3,6 +3,8 @@
- [ ] Remove `parentHashes`, never ended up being needed
- [ ] Add a link to the original commit from the body of the file that's created in the target repo, and/or in the commit body.
+- [ ] Allow passing a `limit` variable to control how many source commits to read
+  - [ ] allow passing 0 to indicate all
- [ ] Flesh out the `README.md` just before pushing with a description of what this tool is
  - Doing it just before push, rather than on first creation, so that it will be added even to target repositories that have already been initialized with this tool
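The `limit` TODO items above could resolve the input along these lines. This is a sketch of the assumed semantics (0 as the "all commits" sentinel), since the behaviour isn't implemented yet; `resolveLimit` is an illustrative name, not an identifier from this repo.

```typescript
// Hypothetical sketch: interpret a `limit` input where 0 means "all commits".
function resolveLimit(rawLimit: number, totalSourceCommits: number): number {
  if (!Number.isInteger(rawLimit) || rawLimit < 0) {
    throw new Error(`limit must be a non-negative integer, got ${rawLimit}`);
  }
  // 0 is the sentinel for "no limit" - read every commit the source repo has.
  return rawLimit === 0
    ? totalSourceCommits
    : Math.min(rawLimit, totalSourceCommits);
}
```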