This reverts commit 5852ddb6a2.
The __jump_table section is more complex than the initial analysis
determined. Each __jump_table entry has three relocs that must be
pulled in together, and one of those relocs refers to symbols in the
__tracepoints section, whose rela section in turn references the
__tracepoint_strings section. Given that complexity, it's better to
fail outright than to appear to handle the section properly.
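For reference, each __jump_table entry at the time looks roughly like
the struct below (a sketch based on the x86-64 jump label layout, not
copied from any particular kernel tree), which is why every entry
carries three relocations:

    /* rough sketch of an x86-64 __jump_table entry of this era */
    typedef unsigned long long jump_label_t;

    struct jump_entry {
        jump_label_t code;      /* address of the nop/jmp site */
        jump_label_t target;    /* jump destination */
        jump_label_t key;       /* address of the key, e.g. inside a
                                 * struct tracepoint in __tracepoints */
    };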
Signed-off-by: Seth Jennings <sjenning@redhat.com>
Almost a line-for-line copy/paste of the smp_locks function. The only
differences are the section name and an offset increment of 8 instead
of 4.
Fixes #157.
If a patch changes a single function which is in a special section that
we don't support, create-diff-object reports "no changed functions were
found". Give a clearer error message in that case, by checking
reachability errors before unchanged errors and by printing all
reachability errors instead of just the first one it encounters.
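A minimal sketch of that ordering (illustrative only; the names and
structures do not match the actual create-diff-object code):

    #include <stdio.h>
    #include <stdlib.h>

    struct func { const char *name; int changed; int unsupported; };

    static void check_results(struct func *funcs, int nr, int nr_changed)
    {
        int i, unsupported = 0;

        /* report every changed function in an unsupported section */
        for (i = 0; i < nr; i++) {
            if (funcs[i].changed && funcs[i].unsupported) {
                fprintf(stderr, "changed function %s is in an unsupported section\n",
                        funcs[i].name);
                unsupported++;
            }
        }
        if (unsupported)
            exit(1);

        /* only then fall back to the generic error */
        if (!nr_changed) {
            fprintf(stderr, "no changed functions were found\n");
            exit(1);
        }
    }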
Fixes #150.
At this point the module does build (i.e. kpatch-build is correct);
however, the addresses in the generated vmlinux don't match those of
the running kernel, so the module fails to load with an ftrace
registration error. That still needs to be investigated.
Signed-off-by: Seth Jennings <sjenning@redhat.com>
When testing whether the patch applies, a partial application causes
the patch utility to return an error but leave the source tree in a
partially patched state. Use --dry-run so the tree is never modified.
No need to accumulate errors if the load or unload fails. Leave the
testprog failure non-fatal so that the test still calls unload to
clean up after itself.
This is a basic integration test framework for kpatch, which tests
building, loading, and unloading patches, as well as any other related
custom tests.
The kpatch-test script looks for test input files in the
tests/integration directory. It expects certain file naming
conventions:
- foo.patch - patch that should build successfully
- bar-FAIL.patch - patch that should fail to build
- foo-LOADED.test - executable which tests whether the foo.patch module
is loaded. It will be used to test that loading/unloading the patch
module works as expected.
Any other *.test files will be executed after all the patch modules have
been built from the *.patch files. They can be used for more custom
tests above and beyond the simple loading and unloading tests.
There's just one test here for now, but many more to come. I'm
constantly doing manual testing of patches and plan to automate those
tests with this framework.
Currently the patch module calls kpatch_unregister in the patch module
exit path. If the activeness safety check fails in kpatch_unregister,
it's too late for the patch module to stop exiting, so all it can do is
panic.
Prevent this scenario by requiring the user to disable the patch module
via sysfs before allowing the module to be unloaded. The sysfs write
will fail if the activeness safety check fails. An rmmod will fail if
the patch is still enabled.
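A minimal sketch of the idea, assuming an "enabled" sysfs attribute and
that kpatch_unregister() performs the activeness safety check (names
are illustrative, not necessarily the real code):

    #include <linux/kernel.h>
    #include <linux/kobject.h>

    /* kpmod and kpatch_unregister() are assumed to exist elsewhere */
    static ssize_t enabled_store(struct kobject *kobj,
                                 struct kobj_attribute *attr,
                                 const char *buf, size_t count)
    {
        int ret, val;

        ret = kstrtoint(buf, 10, &val);
        if (ret)
            return ret;

        if (!val) {
            /* runs the activeness safety check; if it fails, the
             * write fails and the patch stays enabled, so a later
             * rmmod is refused */
            ret = kpatch_unregister(&kpmod);
            if (ret)
                return ret;
        }
        return count;
    }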
Also add support for this new unloading model in "kpatch unload".
Following the same approach, regenerate the [.rela].parainstructions
sections if table entries contain relocations that reference changed
functions (if any).
Fixes #135
Signed-off-by: Seth Jennings <sjenning@redhat.com>
The initial commit had a bug where the offset field of the
.rela.smp_locks entries was not updated to reflect the correct
offset in the truncated .smp_locks section.
Signed-off-by: Seth Jennings <sjenning@redhat.com>
This commit uses the same approach as the bug table support,
mangling the .smp_locks and .rela.smp_locks sections so that
they only contain entries for changed functions (if any).
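A minimal sketch of the idea, assuming one rela per 4-byte .smp_locks
entry and a hypothetical symbol_changed() helper (not the actual
create-diff-object code):

    #include <gelf.h>
    #include <stdbool.h>

    extern bool symbol_changed(GElf_Rela *rela);   /* hypothetical */

    /* keep only relas for changed functions and renumber their offsets
     * into the truncated .smp_locks section; returns the new count */
    static size_t regen_smp_locks_relas(GElf_Rela *relas, size_t nr)
    {
        size_t in, out = 0;

        for (in = 0; in < nr; in++) {
            if (!symbol_changed(&relas[in]))
                continue;
            relas[out] = relas[in];
            relas[out].r_offset = out * 4;   /* 4-byte entries */
            out++;
        }
        return out;
    }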
Fixes #107
Signed-off-by: Seth Jennings <sjenning@redhat.com>
When compiling core.c, the build may report an error like:
"error: implicit declaration of function ‘in_nmi’"
Including the header file where in_nmi() is declared avoids this.
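For example, something like this near the top of core.c (the exact
header providing in_nmi() may vary by kernel version):

    #include <linux/hardirq.h>      /* in_nmi() */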
Signed-off-by: Jincheng Miao <jincheng.miao@gmail.com>
Put funcs, num_funcs, and mod in their own struct called kpatch_module.
This allows us to keep patch module specific variables in one place (and
we'll have more of these variables soon).
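A minimal sketch of the consolidated struct, based on the fields named
above (the real definition in kpatch.h may differ):

    #include <linux/module.h>

    struct kpatch_func;     /* defined elsewhere */

    struct kpatch_module {
        struct module *mod;             /* the patch module itself */
        struct kpatch_func *funcs;      /* functions to be patched */
        int num_funcs;
    };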
Support live patching of NMI handlers. This adds checks for
possible inconsistency when live patching NMI handlers.
The inconsistency problem is any concurrent execution of the old and
new functions, which can lead to unexpected results.
Currently kpatch checks for possible inconsistency with stop_machine,
which covers only threads and normal interrupts. However, because
NMIs cannot be stopped that way, stop_machine is not enough for live
patching NMI handlers or sub-functions invoked in NMI context.
To check for possible inconsistency when live patching those
functions, add an atomic flag that is set when a patching target
function is invoked in NMI context while the kpatch hash table is
being updated. If the flag was set by a target function in NMI
context, we cannot ensure there was no concurrent execution.
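A rough sketch of the check in the ftrace handler, using the helper
names from the changelog below; the details are assumptions, not the
exact code:

    #include <linux/ftrace.h>
    #include <linux/hardirq.h>
    #include <linux/ptrace.h>

    /* kpatch_func, kpatch_operation, the status helpers and the func
     * lookups are assumed to be defined elsewhere in core.c */
    static void notrace
    kpatch_ftrace_handler(unsigned long ip, unsigned long parent_ip,
                          struct ftrace_ops *fops, struct pt_regs *regs)
    {
        struct kpatch_func *func;

        preempt_disable_notrace();
        if (unlikely(in_nmi()) &&
            atomic_read(&kpatch_operation) != KPATCH_OP_NONE) {
            /* a register/unregister is in flight and a patch target
             * ran in NMI context: flag the inconsistency and stick
             * with the already-committed function */
            kpatch_set_status(KPATCH_STATUS_FAILURE);
            func = kpatch_get_committed_func(ip);
        } else {
            func = kpatch_get_func(ip);
        }
        if (func)
            regs->ip = func->new_addr;
        preempt_enable_notrace();
    }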
This fixes issue #65.
Changes from v5:
- Fix to add a NULL check in kpatch_get_committed_func().
Changes from v4:
- Change kpatch_operation to atomic_t.
- Use smp_rmb/wmb barriers between kpatch_operation and kpatch_status.
- Check in_nmi() first and if true, access kpatch_operation.
Changes from v3:
- Fix kpatch_apply/remove_patch to return 0 if succeeded.
Changes from v2:
- Clean up kpatch_get_committed_func as same style of kpatch_get_func.
- Rename opr to op in kpatch_ftrace_handler.
- Consolidate in_nmi() and kpatch_operation check into one condition.
- Fix UNPATCH/PATCH mistype in kpatch_register.
Changes from v1:
- Rename inconsistent_flag to kpatch_status.
- Introduce new enums and helper functions for kpatch_status.
- Use hash_del_rcu instead of hlist_del_rcu.
- Rename get_committed_func to kpatch_get_committed_func.
- Use ACCESS_ONCE for kpatch_operation to prevent compiler optimization.
- Fix to remove (!func || func->updating) condition from NMI check.
- Add more precise comments.
- Fix setting order of kpatch_status and kpatch_operation.
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Handle registration errors by unrolling the ftrace filter.
This also introduces kpatch_get_func() and
kpatch_remove_funcs_from_filter() to consolidate redundant loops.
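A rough sketch of the unroll, assuming a kpatch_ftrace_ops and an
old_addr field; kpatch_remove_funcs_from_filter() is the helper named
above, the rest is illustrative:

    #include <linux/ftrace.h>

    static int kpatch_add_funcs_to_filter(struct kpatch_func *funcs, int num)
    {
        int i, ret = 0;

        for (i = 0; i < num; i++) {
            /* remove=0, reset=0: add this address to the ops filter */
            ret = ftrace_set_filter_ip(&kpatch_ftrace_ops,
                                       funcs[i].old_addr, 0, 0);
            if (ret) {
                /* roll back the entries added so far */
                kpatch_remove_funcs_from_filter(funcs, i);
                break;
            }
        }
        return ret;
    }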
Changes from v2:
- Rebased on the latest kpatch.
Changes from v1:
- Rename get_kpatch_func to kpatch_get_func.
- Fix function definition style issue.
- Do not jump to a label in "if" block.
- Rollback the ftrace user counter if we hit an error.
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
The kpatch utility is now user friendly enough that it can be used
instead of direct insmods. We should encourage people to use it, since
we will soon be adding user space functionality above and beyond
insmod/rmmod when loading and unloading.
Allow "kpatch load" to find the core module when kpatch is run directly
from the git tree. This gives the user the option to use the kpatch
utility directly without having to do a "make install".
While debugging the code for the bug table logic, I found it useful to
know which rela section and entry the error occurred on.
Signed-off-by: Seth Jennings <sjenning@redhat.com>
This commit adds a new function to properly handle the bug table.
It works by going through .rela__bug_table, after the changed
function symbols have already been marked, and rewrites the section
including only the relocations pertaining to bug entries for
changed functions.
The __bug_table section itself is not modified, resulting in
"blank" bug entries: ones whose IP and filename pointers will
not be relocated and will therefore be zero. While this wastes a
bit of space, it simplifies the code not to remove these blank
entries. They do no harm.
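A minimal sketch of the rewrite, with a hypothetical
entry_is_for_changed_func() helper standing in for the real symbol
checks:

    #include <gelf.h>
    #include <stdbool.h>

    extern bool entry_is_for_changed_func(size_t entry);  /* hypothetical */

    /* keep only the relas whose __bug_table entry belongs to a changed
     * function; returns the new reloc count for the rewritten section */
    static size_t prune_bug_table_relas(GElf_Rela *relas, size_t nr,
                                        size_t bug_entry_size)
    {
        size_t in, out = 0;

        for (in = 0; in < nr; in++) {
            size_t entry = relas[in].r_offset / bug_entry_size;

            if (!entry_is_for_changed_func(entry))
                continue;
            /* r_offset stays as-is: __bug_table keeps its size and
             * the dropped entries simply remain blank */
            relas[out++] = relas[in];
        }
        return out;
    }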
Signed-off-by: Seth Jennings <sjenning@redhat.com>
The section header size (sh_size) is calculated at output time by
libelf; we only use it as a read-only value from files we have read.
With the next patch we are changing the size of the .rela__bug_table
section. Let's use d_size instead, since that is the value libelf
uses to calculate sh_size at output time.
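A minimal sketch, assuming one Elf_Data descriptor per section as the
tool already uses:

    #include <gelf.h>

    static size_t section_size(Elf_Scn *scn)
    {
        Elf_Data *data = elf_getdata(scn, NULL);

        /* d_size is what libelf uses to recompute sh_size at output
         * time, so it stays valid after we grow or shrink a section */
        return data ? data->d_size : 0;
    }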
Signed-off-by: Seth Jennings <sjenning@redhat.com>
Allow bundling of .bss.* sections that result from -fdata-sections,
so that rela entries referencing data in bss sections by section
symbol can be replaced with the object symbol, allowing them to be
linked to the existing data object in the kernel.
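A rough sketch of the replacement, with illustrative structures that
are not the actual create-diff-object types:

    #include <string.h>

    /* illustrative structures only */
    struct symbol { char *name; int is_section_sym; struct symbol *bundled; };
    struct rela   { struct symbol *sym; long addend; };

    static void replace_bss_section_sym(struct rela *rela)
    {
        struct symbol *sym = rela->sym;

        if (sym->is_section_sym && sym->bundled &&
            !strncmp(sym->name, ".bss.", 5)) {
            /* e.g. a reference to ".bss.foo" + addend becomes a
             * reference to "foo"; with -fdata-sections the object
             * starts at offset 0, so the addend can stay as-is */
            rela->sym = sym->bundled;
        }
    }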
Signed-off-by: Seth Jennings <sjenning@redhat.com>