kpatch/kmod/patch/kpatch.lds.S


implement per-object patching/relocations

The recent module patching code has exposed some problems with our data
structures. We currently patch the funcs and dynrelas individually, which is
kind of scary now that different objects can be patched at different times.
Instead, it's cleaner and safer to group them by patched object.

This patch implements per-object patching and relocations by refactoring the
interfaces:

- Completely separate the create-diff-object <-> patch module interface from
  the patch module <-> core module interface. create-diff-object will include
  "kpatch-patch.h" but not "kpatch.h". Thus, create-diff-object has no
  knowledge of the core module's interfaces, and the core module has no
  knowledge of the patch module's special sections.

- The newly added kpatch-patch.h defines the format of the patch module's
  special sections. It's used by create-diff-object to create the special
  sections and by the patch module to read them.

- kpatch.h still defines the core module interfaces. Each kpatch_module has a
  list of kpatch_objects, one for each module object to be patched. Each
  kpatch_object has a list of kpatch_funcs and a list of kpatch_dynrelas. The
  patch module creates these lists when populating kpatch_module.

Structuring the data this way allows us to patch funcs and dynrelas on a
per-patched-object basis, which will let us catch more error scenarios and
make the code easier to manage going forward. It also allows much more code
to be shared between kpatch_register() and kpatch_module_notify().
2014-06-17 14:51:47 +00:00
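The per-object layout described in the commit message can be sketched as plain C. This is a simplified userspace illustration, not the real kpatch.h: the actual structures use kernel `struct list_head` lists, and the field names beyond `kpatch_module`/`kpatch_object`/`kpatch_func`/`kpatch_dynrela` are assumptions.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified userspace sketch of the per-object grouping described in
 * the commit message.  The real kpatch.h uses kernel struct list_head
 * for these lists; the singly linked "next" pointers and the exact
 * fields shown here are illustrative stand-ins. */
struct kpatch_func {
        unsigned long old_addr;         /* address of the original function */
        unsigned long new_addr;         /* address of the replacement */
        struct kpatch_func *next;
};

struct kpatch_dynrela {
        unsigned long dest;             /* location to be relocated */
        unsigned long src;              /* symbol address to write there */
        struct kpatch_dynrela *next;
};

struct kpatch_object {
        const char *name;               /* "vmlinux" or a module name */
        struct kpatch_func *funcs;      /* funcs grouped under this object */
        struct kpatch_dynrela *dynrelas;
        struct kpatch_object *next;
};

struct kpatch_module {
        struct kpatch_object *objects;  /* one entry per patched object */
};

/* Count the funcs attached to one patched object. */
static size_t count_funcs(const struct kpatch_object *obj)
{
        size_t n = 0;
        for (const struct kpatch_func *f = obj->funcs; f; f = f->next)
                n++;
        return n;
}
```

Because funcs and dynrelas hang off their owning `kpatch_object`, the core module can apply or revert everything for one object (say, a just-loaded module) without touching the lists of any other object.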
__kpatch_funcs = ADDR(.kpatch.funcs);
__kpatch_funcs_end = ADDR(.kpatch.funcs) + SIZEOF(.kpatch.funcs);
#ifdef __KPATCH_MODULE__
__kpatch_dynrelas = ADDR(.kpatch.dynrelas);
__kpatch_dynrelas_end = ADDR(.kpatch.dynrelas) + SIZEOF(.kpatch.dynrelas);
__kpatch_checksum = ADDR(.kpatch.checksum);
#endif
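The symbols above give C code the bounds of each section, so the number of entries falls out of pointer arithmetic. A minimal sketch of that consumption, assuming a two-field entry struct (the real layout lives in kpatch-patch.h); stand-in arrays replace the linker-resolved symbols so the sketch compiles on its own:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of how the patch module might consume the __kpatch_funcs /
 * __kpatch_funcs_end bounds defined by this linker script.  In the real
 * module those would be extern declarations resolved at link time; here
 * an ordinary array stands in for the section.  The fields of
 * struct kpatch_patch_func are an assumption for illustration. */
struct kpatch_patch_func {
        unsigned long new_addr;
        unsigned long old_addr;
};

/* Stand-ins for the linker-provided section bounds. */
static struct kpatch_patch_func fake_section[3];
static struct kpatch_patch_func *funcs_start = fake_section;
static struct kpatch_patch_func *funcs_end = fake_section + 3;

/* The entry count is just the distance between the two bounds. */
static size_t num_patch_funcs(void)
{
        return (size_t)(funcs_end - funcs_start);
}
```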
SECTIONS
{
        .kpatch.callbacks.pre_patch : {
                __kpatch_callbacks_pre_patch = . ;
                *(.kpatch.callbacks.pre_patch)
                __kpatch_callbacks_pre_patch_end = . ;
                /*
                 * Pad the end of the section with zeros in case the section
                 * is empty.  This prevents the kernel from discarding the
                 * section at module load time.
                 * __kpatch_callbacks_pre_patch_end will still point to the
                 * end of the section before the padding.  If the
                 * .kpatch.callbacks.pre_patch section is empty,
                 * __kpatch_callbacks_pre_patch equals
                 * __kpatch_callbacks_pre_patch_end.
                 */
                QUAD(0);
        }
        .kpatch.callbacks.post_patch : {
                __kpatch_callbacks_post_patch = . ;
                *(.kpatch.callbacks.post_patch)
                __kpatch_callbacks_post_patch_end = . ;
                QUAD(0);
        }
        .kpatch.callbacks.pre_unpatch : {
                __kpatch_callbacks_pre_unpatch = . ;
                *(.kpatch.callbacks.pre_unpatch)
                __kpatch_callbacks_pre_unpatch_end = . ;
                QUAD(0);
        }
        .kpatch.callbacks.post_unpatch : {
                __kpatch_callbacks_post_unpatch = . ;
                *(.kpatch.callbacks.post_unpatch)
                __kpatch_callbacks_post_unpatch_end = . ;
                QUAD(0);
        }
        .kpatch.force : {
                __kpatch_force_funcs = . ;
                *(.kpatch.force)
                __kpatch_force_funcs_end = . ;
                QUAD(0);
        }
}
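The start/end symbol pairs above bracket arrays of callback pointers, and the `QUAD(0)` padding keeps each section from being discarded when empty while leaving the `_end` symbol pointing at the last real entry. A hedged sketch of how a patch module might walk such a section, with stand-in arrays for the linker-resolved symbols and illustrative names throughout:

```c
#include <assert.h>

/* Sketch of how a patch module might walk the pre-patch callbacks
 * bracketed by __kpatch_callbacks_pre_patch{,_end}.  The real symbols
 * are resolved by this linker script; stand-in arrays keep the sketch
 * self-contained, and the callback signature is an assumption.  An
 * empty callback list shows up as start == end (the QUAD(0) padding
 * sits past _end), so the loop body simply never runs. */
typedef int (*kpatch_pre_patch_cb)(void *obj);

static int cb_calls;

static int demo_cb(void *obj)
{
        (void)obj;
        cb_calls++;
        return 0;       /* nonzero would abort patching */
}

/* Stand-ins for the linker-provided section bounds. */
static kpatch_pre_patch_cb cbs[] = { demo_cb, demo_cb };
static kpatch_pre_patch_cb *pre_patch_start = cbs;
static kpatch_pre_patch_cb *pre_patch_end = cbs + 2;

static int run_pre_patch_callbacks(void *obj)
{
        for (kpatch_pre_patch_cb *p = pre_patch_start;
             p < pre_patch_end; p++) {
                int ret = (*p)(obj);
                if (ret)
                        return ret;
        }
        return 0;
}
```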