From 81ea932c1e342cca59a5fc201be0d71816b436f3 Mon Sep 17 00:00:00 2001 From: Thomas Schoebel-Theuer Date: Wed, 16 Jan 2019 15:11:51 +0100 Subject: [PATCH] doc: explain zfs snapshots + architecture --- docu/images/raid-lvm-architecture.fig | 59 +++++++++ docu/images/zpool-architecture.fig | 47 +++++++ docu/mars-manual.lyx | 183 ++++++++++++++++++++++++++ 3 files changed, 289 insertions(+) create mode 100644 docu/images/raid-lvm-architecture.fig create mode 100644 docu/images/zpool-architecture.fig diff --git a/docu/images/raid-lvm-architecture.fig b/docu/images/raid-lvm-architecture.fig new file mode 100644 index 00000000..aea8b08f --- /dev/null +++ b/docu/images/raid-lvm-architecture.fig @@ -0,0 +1,59 @@ +#FIG 3.2 Produced by xfig version 3.2.5c +Landscape +Center +Metric +A4 +100.00 +Single +-2 +1200 2 +6 270 2610 630 2880 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 450 2700 180 90 450 2700 630 2610 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 450 2790 180 90 450 2790 630 2700 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 270 2700 270 2790 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 630 2700 630 2790 +-6 +6 1170 2610 1530 2880 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 1350 2700 180 90 1350 2700 1530 2610 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 1350 2790 180 90 1350 2790 1530 2700 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 1170 2700 1170 2790 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 1530 2700 1530 2790 +-6 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 720 2250 540 2610 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 1080 2250 1260 2610 +2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 + 450 1980 1350 1980 1350 2250 450 2250 450 1980 +2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 + 450 270 1350 270 1350 720 450 720 450 270 +2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 + 450 1620 1350 1620 1350 1890 450 1890 450 1620 +2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 + 450 1260 1350 1260 1350 1530 450 1530 450 1260 +2 2 1 1 0 -1 50 -1 -1 4.000 0 0 -1 0 0 5 + 450 810 1350 810 1350 1080 450 1080 450 810 +2 1 0 1 0 7 50 -1 -1 0.000 
0 0 -1 0 0 2 + 900 720 900 810 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 900 1080 900 1260 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 900 1530 900 1620 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 900 1890 900 1980 +4 1 0 50 -1 2 15 0.0000 4 45 180 900 2790 ...\001 +4 1 0 50 -1 -1 10 0.0000 4 135 780 900 3060 48 spindles\001 +4 1 0 50 -1 -1 10 0.0000 4 105 195 900 450 zfs\001 +4 1 0 50 -1 -1 10 0.0000 4 135 660 900 630 snapshots\001 +4 1 0 50 -1 2 15 0.0000 4 45 180 900 2430 ...\001 +4 1 0 50 -1 -1 10 0.0000 4 105 405 900 2160 RAID\001 +4 1 0 50 -1 -1 10 0.0000 4 120 660 900 1800 pvs + vgs\001 +4 1 0 50 -1 -1 10 0.0000 4 135 735 900 990 replication\001 +4 1 0 50 -1 -1 10 0.0000 4 105 195 900 1440 lvs\001 +4 1 0 50 -1 -1 10 0.0000 4 105 270 270 1440 10x\001 +4 1 0 50 -1 -1 10 0.0000 4 105 270 270 990 10x\001 +4 1 0 50 -1 -1 10 0.0000 4 105 270 270 540 10x\001 diff --git a/docu/images/zpool-architecture.fig b/docu/images/zpool-architecture.fig new file mode 100644 index 00000000..3fa60d7c --- /dev/null +++ b/docu/images/zpool-architecture.fig @@ -0,0 +1,47 @@ +#FIG 3.2 Produced by xfig version 3.2.5c +Landscape +Center +Metric +A4 +100.00 +Single +-2 +1200 2 +6 270 2610 630 2880 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 450 2700 180 90 450 2700 630 2610 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 450 2790 180 90 450 2790 630 2700 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 270 2700 270 2790 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 630 2700 630 2790 +-6 +6 1170 2610 1530 2880 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 1350 2700 180 90 1350 2700 1530 2610 +1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 1350 2790 180 90 1350 2790 1530 2700 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 1170 2700 1170 2790 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 1530 2700 1530 2790 +-6 +6 90 540 450 720 +4 1 0 50 -1 -1 10 0.0000 4 105 270 270 720 10x\001 +-6 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 720 2250 540 2610 +2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 + 1080 2250 1260 2610 +2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 
0 5 + 450 270 1350 270 1350 2250 450 2250 450 270 +2 1 1 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 2 + 450 1260 1350 1260 +4 1 0 50 -1 2 15 0.0000 4 45 180 900 2790 ...\001 +4 1 0 50 -1 -1 10 0.0000 4 135 780 900 3060 48 spindles\001 +4 1 0 50 -1 -1 10 0.0000 4 135 390 900 1620 zpool\001 +4 1 0 50 -1 -1 10 0.0000 4 135 885 900 1980 functionality\001 +4 1 0 50 -1 -1 10 0.0000 4 120 660 900 1800 pvs + vgs\001 +4 1 0 50 -1 -1 10 0.0000 4 105 195 900 450 zfs\001 +4 1 0 50 -1 -1 10 0.0000 4 135 660 900 630 snapshots\001 +4 1 0 50 -1 -1 10 0.0000 4 105 495 900 810 +RAID\001 +4 1 0 50 -1 2 15 0.0000 4 45 180 900 2430 ...\001 +4 1 0 50 -1 20 8 0.0000 4 135 810 900 1350 interface\001 +4 1 0 50 -1 20 8 0.0000 4 135 720 900 1260 internal\001 diff --git a/docu/mars-manual.lyx b/docu/mars-manual.lyx index e79f3f7a..45848e5d 100644 --- a/docu/mars-manual.lyx +++ b/docu/mars-manual.lyx @@ -2525,6 +2525,189 @@ The last item means that ZFS by itself does not protect against amok-running enterprise-critical applications. \end_layout +\begin_layout Standard +\noindent +\begin_inset Graphics + filename images/lightbulb_brightlit_benj_.png + lyxscale 12 + scale 7 + +\end_inset + +Notice that zfs snapshots can be combined with DRBD or MARS, because zfs + snapshots reside at the +\emph on +filesystem +\emph default + layer, while DRBD / MARS replicas are located at the +\emph on +block +\emph default + layer. + Just create your zpools on +\emph on +top +\emph default + of DRBD or MARS virtual devices, and import / export them +\emph on +individually +\emph default + upon handover / failover of each LV. +\end_layout + +\begin_layout Standard +\noindent +\begin_inset Graphics + filename images/MatieresCorrosives.png + lyxscale 50 + scale 17 + +\end_inset + + There is a +\series bold +\emph on +fundamental +\series default +\emph default + difference between zpools and classical RAID / LVM stacked architectures. + Some zfs advocates promote zpools as a replacement for both RAID + and LVM. 
+ However, there is a +\series bold +massive difference +\series default + in architecture, as explained in the following example (10 logical resources + over 48 physical spindles), which achieves practically the +\series bold +\emph on +same +\series default + zfs snapshot functionality +\emph default + from a user's perspective, but in a different way: +\end_layout + +\begin_layout Standard +\noindent +\align center +\begin_inset Graphics + filename images/raid-lvm-architecture.fig + height 6cm + +\end_inset + + +\begin_inset Graphics + filename images/zpool-architecture.fig + height 6cm + +\end_inset + + +\end_layout + +\begin_layout Standard +\noindent +When RAID functionality is implemented by zfs, it is located at the +\emph on +top +\emph default + of the hierarchy. + On one hand, this easily allows for different RAID levels for each of the + 10 different logical resources. + On the other hand, this +\emph on +exposes +\emph default + the +\series bold +physical spindle configuration +\series default + to the topmost filesystem layer (48 spindles in this example). + There is no easy way to replicate these +\emph on +physical properties +\emph default + in a larger / heterogeneous distributed system, e.g. + when some hardware components are replaced over a longer period of time + (hardware lifecycle, or LV Football as explained in chapter +\begin_inset CommandInset ref +LatexCommand ref +reference "chap:LV-Football" + +\end_inset + +). + Essentially, replication of +\emph on +logical +\emph default + structures like snapshots remains the only reasonable option, with the + drawbacks as explained above. 
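The stacked (left-hand) architecture, and the zpool-on-MARS handover described earlier, can be sketched roughly as follows. This is a hypothetical shell sketch, not taken from the manual: all names (md0, vg00, lv-01, pool01) are invented examples, the RAID is scaled down from 48 to 4 spindles, and the commands assume working mdadm / LVM / MARS / zfs installations.

```shell
# Hypothetical example names throughout; adapt to your setup.

# Bottom of the stack: software RAID over the physical spindles
# (scaled down to 4 disks here; the text's example uses 48).
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]

# Middle: LVM carves the RAID into logical volumes,
# one per logical resource (10 in the text's example).
pvcreate /dev/md0
vgcreate vg00 /dev/md0
lvcreate -L 1T -n lv-01 vg00

# Block-layer replication via MARS, then zfs snapshots at the top:
marsadm create-resource lv-01 /dev/vg00/lv-01
zpool create pool01 /dev/mars/lv-01
zfs snapshot pool01@before-handover

# Handover of this one LV: export the pool before switching roles ...
zpool export pool01
marsadm secondary lv-01

# ... and on the new primary host:
marsadm primary lv-01
zpool import pool01
```

Because each zpool sits on its own MARS virtual device, each of the 10 resources can be handed over individually while the others stay in place.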
+\end_layout + +\begin_layout Standard +\noindent +\begin_inset Graphics + filename images/MatieresCorrosives.png + lyxscale 50 + scale 17 + +\end_inset + + There is another argument: zfs tries to +\emph on +hide +\emph default + its internal structures and interfaces from sysadmins, forming a more + or less +\series bold +monolithic +\begin_inset Foot +status open + +\begin_layout Plain Layout +Some sysadmins acting as zfs advocates claim this as an advantage, + because they need to understand only a single tool for managing +\begin_inset Quotes eld +\end_inset + +everything +\begin_inset Quotes erd +\end_inset + +. + However, this is a short-sighted argument when it comes to the +\emph on +true +\emph default + flexibility offered by a component-based system, where multiple types + of hardware / software RAID, multiple types of LVM functionality, and much + more can be combined almost orthogonally. +\end_layout + +\end_inset + + architecture +\series default + as seen from outside. + This violates the classical +\emph on +layering rules +\emph default + introduced by Dijkstra. + In contrast, classical LVM-based configurations are +\series bold +component-oriented +\series default +, according to the +\series bold +Unix philosophy +\series default +. +\end_layout + \begin_layout Section Local vs Centralized Storage \begin_inset CommandInset label