This script processes GitHub backport PRs, checking for proper cross-linking
with a Redmine Backport tracker issue and, if a PR is merged and properly
cross-linked, it can optionally resolve the tracker issue and correctly
populate the "Target version" field.
The script takes a single positional argument, which is optional. If the
argument is an integer, it is assumed to be a GitHub backport PR ID (e.g. "28549").
In this mode ("single PR mode") the script processes a single GitHub backport
PR and terminates.
If the argument is not an integer, or is missing, it is assumed to be a
commit (SHA1 or tag) to start from. If no positional argument is given, it
defaults to the tag "BRI", which might have been added by the last run of the
script. This mode is called "scan merge commits mode".
In both modes, the script scans a local git repo, which is assumed to be
in the current working directory. In single PR mode, the script will work
only if the PR's merge commit is present in the current branch of the local
git repo. In scan merge commits mode, the script starts from the given SHA1
or tag, taking each merge commit in turn and attempting to obtain the GitHub
PR number for each.
For each GitHub PR, the script interactively displays all relevant information
(NOTE: this includes displaying the GitHub PR and Redmine backport issue in
web browser tabs!) and prompts the user for her preferred disposition.
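For illustration, the two modes might be invoked like this (the script name
and the tag are assumed here; they are not given above):

    ./backport-resolve-issue 28549      # single PR mode: process backport PR 28549
    ./backport-resolve-issue v14.2.1    # scan merge commits mode, from a SHA1 or tag
    ./backport-resolve-issue            # scan merge commits mode, from the tag "BRI"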
Signed-off-by: Nathan Cutler <ncutler@suse.com>
* move `for_make_check` to the beginning of the script, as FreeBSD will also
use this variable
* extract the `preload_wheels_for_tox()` function to improve readability
* call `preload_wheels_for_tox()` only if `for_make_check` is true (see the
sketch below)
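A minimal sketch of the resulting shape of the script (bodies elided; the
surrounding logic is assumed):

    for_make_check=false    # defined up front so the FreeBSD path can read it too
    # ... for_make_check is flipped to true where appropriate ...

    preload_wheels_for_tox() {
        # extracted helper: pre-download the wheels needed by the tox runs
        : # body elided
    }

    if $for_make_check; then
        preload_wheels_for_tox
    fi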
Signed-off-by: Kefu Chai <kchai@redhat.com>
When --resolve-parent is provided on the command line, the script
additionally checks the backport issues of every parent issue whose
backport issues all exist already. If all of those backport issues are
in status "Resolved", the parent issue's status is set to "Resolved"
as well.
Signed-off-by: Nathan Cutler <ncutler@suse.com>
set parent scope variables in the same shell.
In a statement like

    foo | while read ....

the `while read ...` loop is executed in a subshell, so it cannot change
the bash variables of its parent shell.
In this change, the output of `foo` is instead redirected to the stdin of
the `while read` loop, which therefore runs in the current shell, so we
can capture the test failures.
Before this change, the test always succeeded, even if there were failures.
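A minimal sketch of the difference (`foo` and the `failed` flag are
hypothetical stand-ins for the real test pipeline):

    failed=0
    foo | while read -r line; do failed=1; done
    echo "$failed"    # still 0: the loop ran in a subshell

    failed=0
    while read -r line; do failed=1; done < <(foo)
    echo "$failed"    # 1 if foo printed anything: the loop ran in this shell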
Signed-off-by: Kefu Chai <kchai@redhat.com>
check-generated.sh tries to verify that a type is invariant under encoding
and decoding, so we should keep the value of `DecayCounter` the same across
encoding/decoding/dumping.
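A hedged sketch of that kind of round-trip check with ceph-dencoder (the
exact invocations are assumed here, not quoted from check-generated.sh):

    ceph-dencoder type DecayCounter select_test 1 encode export /tmp/a
    ceph-dencoder type DecayCounter import /tmp/a decode encode export /tmp/b
    cmp /tmp/a /tmp/b || echo "DecayCounter changed across encode/decode"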
Signed-off-by: Kefu Chai <kchai@redhat.com>
* partially revert 5c25a018, which is not backward compatible.
* change `ServiceMap::get_daemon()` so that it returns a
  `pair<Daemon*,bool>`.
gid is an `optional<uint64_t>`, so we cannot dump it without checking.
Fixes: https://tracker.ceph.com/issues/41424
Signed-off-by: Kefu Chai <kchai@redhat.com>
We should update the explicit loc and location even if
`ofs >= manifest->obj_size`: there is a chance that we are updating an end
iterator whose ofs is equal to obj_size. Before being updated, the end
iterator points to an implicit location, while after being updated the
manifest could be using an explicit location, so we should update the end
iterator as well.
Fixes: https://tracker.ceph.com/issues/41416
Signed-off-by: Kefu Chai <kchai@redhat.com>
* remove the init() member function, which is used solely by the
constructors.
* use constructor delegation to minimize the repetition
Signed-off-by: Kefu Chai <kchai@redhat.com>
We should update the end iterator when updating RGWObjManifest with its
parts.
Fixes: https://tracker.ceph.com/issues/41416
Signed-off-by: Kefu Chai <kchai@redhat.com>
Otherwise we will fail to install the build dependencies of
`lazy-object-proxy` from the wheelhouse. `lazy-object-proxy` does not
list `setuptools_scm` in its `setup.py`; instead, it lists
`setuptools_scm` in `setup.cfg` and `pyproject.toml` as a `build-system`
requirement. Unfortunately, `pip download` only downloads the
install/run-time dependencies at this moment, and `lazy-object-proxy`
does not offer a binary package for at least Python 2.7.
Ideally, `pip download` would collect its dependencies like

    Collecting setuptools_scm>=3.3.1 (from lazy-object-proxy->astroid<3,>=2.2.0->pylint->-r requirements-lint.txt (line 1))

so we need to use `pip wheel` to download the build-time dependencies.
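A hedged sketch of the difference (the wheelhouse path is chosen only for
illustration):

    # fetches lazy-object-proxy and its install/run-time deps only;
    # the build-time requirement setuptools_scm is missed
    pip download --dest wheelhouse lazy-object-proxy

    # builds a wheel, so the build-system requires (setuptools_scm)
    # are fetched and installed as part of the build
    pip wheel --wheel-dir wheelhouse lazy-object-proxy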
See also https://github.com/pypa/pip/issues/6222
Signed-off-by: Kefu Chai <kchai@redhat.com>
common/condition_variable_debug: do not assert() if sloppy
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Willem Jan Withagen <wjw@digiware.nl>
mgr/BaseMgrModule: use PyInt_Check() to be compatible with py2
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Sebastian Wagner <swagner@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Abhishek Lekshmanan <abhishek@suse.com>
For unfound objects, we might append LOST_REVERT log entries,
which shall allow these objects to be reverted to the newest
available version later.
However, we currently lack support for rewinding the clean_regions
portion too when marking unfound objects as lost with inc-recovery mode
enabled. Hence we must mark these unfound-revert objects as fully dirty
to make sure they can be recovered correctly.
E.g.:
- primary is pulling object A from replica 1
- object A is corrupted on replica 1
- object A is now unfound
- mark object A as lost; replica 1 will persist a wrong
  missing item for object A.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
There is no consumer.
Actually, I think this field is only meaningful as an indication of
whether we should initiate an inc-recovery or not; if not, then we shall
fall back to triggering a full-recovery instead.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
In general we shall build the missing set (and hence clean_regions)
based on the pg log. However, there are currently still 5 cases in which
we might call missing.add to explicitly add a new pg_missing_item to the
missing set (or replace an existing pg_missing_item entirely):
1. We explicitly build the missing set on startup, in which case
we know we are trying to be compatible with pre-kraken versions,
so it should be ok for us to disable inc-recovery.
2. We are currently processing the authoritative log, and some
divergent objects have been detected. For simplicity (and correctness),
we should disable inc-recovery entirely for these objects.
3. We are re-building the missing set, e.g., due to the global
CEPH_OSDMAP_RECOVERY_DELETES policy changing.
In this case we know we are at the end of upgrading from a
previous version that lacks CEPH_OSDMAP_RECOVERY_DELETES support.
Hence the recommended option is to disable inc-recovery at the same
time, since these objects should lack inc-recovery support too.
4. We are adding or re-adding a missing object into the primary's
missing_loc. It doesn't matter whether we have a correct clean_regions
there, since we never actually refer to that field from missing_loc
when we actually start to perform object recovery later.
5. We are auto-repairing a corrupted object and hence need to add it
to the corresponding missing set first, e.g., by leveraging the
existing recovery procedure. In this case, we always disable
inc-recovery to make sure this object can be fully (and correctly)
recovered later.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>