A short introduction for the first-time user of an erasure coded pool.
It includes a reminder of how it relates to cache tiering and links to
the instructions for defining new profiles, with an example.
There were examples in the developer documentation, but the operator
expects to find such a guide in the rados operations chapter.
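For orientation, the guide boils down to commands along these lines (a
sketch; the k/m values, pg counts and the myprofile / ecpool /
hot-storage names are illustrative):

    # define a profile and create an erasure coded pool that uses it
    ceph osd erasure-code-profile set myprofile k=3 m=2
    ceph osd pool create ecpool 12 12 erasure myprofile
    # an erasure coded pool is typically fronted by a replicated cache tier
    ceph osd pool create hot-storage 12
    ceph osd tier add ecpool hot-storage
    ceph osd tier cache-mode hot-storage writeback
    ceph osd tier set-overlay ecpool hot-storage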
http://tracker.ceph.com/issues/9970
Fixes: #9970
Signed-off-by: Loic Dachary <ldachary@redhat.com>
This commit introduces some updates for the OpenStack Juno release. New
flags have been added, many trailing spaces have been removed, and a new
recommendation for Glance cache management has been included.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Not sure how 'quick' this really is now compared with
the full filesystem instructions, but let's not leave
it incomplete.
Signed-off-by: John Spray <john.spray@redhat.com>
LRC now uses Jerasure as the default EC backend, but it is actually
possible to switch to another backend such as ISA using the low level
configuration. This commit adds documentation on how to specify the EC
backend in each LRC layer.
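A sketch of the low level syntax in question (the LRCprofile name, the
mapping and the layer strings are illustrative; the second field of a
layer is where a different plugin such as isa can be requested):

    ceph osd erasure-code-profile set LRCprofile \
        plugin=lrc \
        mapping=DD_ \
        layers='[ [ "DDc", "plugin=isa technique=cauchy" ] ]'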
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
Add release note
New librados interface
New pg_nls_response_t over the wire protocol
Ignore internal namespace (.ceph_internal)
Enhance ObjListCtx to keep independent IoCtxImpl so nspace won't change out from under listing code
Add ListObject with private implementation ListObjectImpl to return from iterator
Add EINVAL error to the old librados interface when LIBRADOS_ALL_NSPACES is set
Add throw to the old librados C++ interface when all_nspaces is set
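A sketch of the user-visible side of the feature through the rados CLI,
assuming its -N (select a namespace) and --all (list every namespace)
options; mypool, ns1 and foo are illustrative names:

    rados -p mypool -N ns1 put foo /etc/hosts   # write into namespace ns1
    rados -p mypool ls --all                    # list objects across all namespaces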
Fixes: #9031
Signed-off-by: David Zafman <dzafman@redhat.com>
vstart.sh now creates the users for the default configuration of the
s3-tests, available at https://github.com/ceph/s3-tests.
Also updated the documentation to show the correct RadosGW port.
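A sketch of the intended workflow, assuming vstart.sh's -n (new cluster)
and -r (start radosgw) flags and an s3-tests configuration file named
local.conf that points at the local gateway:

    ./vstart.sh -n -r
    # from an s3-tests checkout
    S3TEST_CONF=local.conf ./virtualenv/bin/nosetests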
Signed-off-by: Luis Pabón <lpabon@redhat.com>
* Update the ceph tell examples from the old ceph <daemon> tell <id>
  form to the new ceph tell <daemon>.<id> form
* Add usage examples for easier copy / paste, as sketched below
* Add MON to the list of daemons that can be profiled
* Document CEPH_HEAP_PROFILER_INIT=true
* Remove trailing empty lines
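With the new syntax the profiler can be driven along these lines (osd.0
and mon.a are illustrative daemon names):

    ceph tell osd.0 heap start_profiler
    ceph tell osd.0 heap dump
    ceph tell mon.a heap stats
    ceph tell osd.0 heap stop_profiler
    # alternatively, set CEPH_HEAP_PROFILER_INIT=true in the daemon's
    # environment to start the profiler when the daemon starts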
Signed-off-by: Loic Dachary <loic-201408@dachary.org>
When a cluster has few OSDs (fewer than 50), propose a preselection of
values: as long as the number of placement groups is neither too small
nor too large, it won't make much of a difference anyway.
Users of small clusters tend to blindly apply the (OSDs * 100) / (pool
size) formula and worry about choosing a wrong value because they do not
understand the tradeoffs. The preselection will hopefully save them from
this uncertainty.
Add an explanation of how placement groups relate to OSDs, CRUSH and
pools to help understand the tradeoffs. Explain the tradeoffs
(durability, distribution and resource usage) with examples.
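For reference, a worked instance of the formula (the 40 OSDs and pool
size of 3 are arbitrary, and the result is assumed to be rounded up to
the next power of two as the documentation advises):

    # (OSDs * 100) / pool size, rounded up to the next power of two
    osds=40
    pool_size=3
    echo $(( osds * 100 / pool_size ))   # 1333 -> choose 2048 placement groups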
Signed-off-by: Loic Dachary <loic-201408@dachary.org>
Reviewed-by: Gerben Meijer <infernix@gmail.com>
Reviewed-by: Laurent Guerby <laurent@guerby.net>