ceph/PendingReleaseNotes
Commit 3ea2e518d2 by Sage Weil (2017-09-14): mon/OSDMonitor: prevent pg_num from exceeding mon_pg_warn_max_per_osd
Check total pg count for the cluster vs osd count and max pgs per osd
before allowing pool creation, pg_num change, or pool size change.

"in" OSDs are the ones we distribute data too, so this should be the right
count to use.  (Whether they happen to be up or down at the moment is
incidental.)

If the user really wants to create the pool, they can change the
configurable limit.

Signed-off-by: Sage Weil <sage@redhat.com>
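
As a rough illustration of the check described above (the formula and the
numbers here are assumptions for illustration, not the exact logic in
mon/OSDMonitor), the monitor compares the projected number of PG replicas
per "in" OSD against the configured maximum::

  # Assumed formula: sum(pool pg_num * pool size) / number of "in" OSDs.
  # Illustrative numbers: 10 "in" OSDs, an existing pool with 512 PGs at
  # size 3, and a proposed new pool with 256 PGs at size 3.
  echo $(( (512 * 3 + 256 * 3) / 10 ))   # prints 230, above a 200 PG limit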

>= 12.2.0
---------
- *CephFS*:
  * Limiting MDS cache via a memory limit is now supported using the new
    ``mds_cache_memory_limit`` config option (1GB by default). A cache
    reservation can also be specified using ``mds_cache_reservation`` as a
    percentage of the limit (5% by default). Limits by inode count are
    still supported using ``mds_cache_size``. Setting ``mds_cache_size``
    to 0 (the default) disables the inode limit. (An example of setting
    these options appears after this list.)
- The maximum number of PGs per OSD before the monitor issues a
  warning has been reduced from 300 to 200 PGs. 200 is still twice
  the generally recommended target of 100 PGs per OSD. This limit can
  be adjusted via the ``mon_pg_warn_max_per_osd`` option on the
  monitors.
- Creating pools or adjusting pg_num will now fail if the change would
  make the number of PGs per OSD exceed the configured
  ``mon_pg_warn_max_per_osd`` limit. The option can be adjusted if it
  is really necessary to create a pool with more PGs (see the second
  example after this list).
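
For the CephFS cache limits above, here is a sketch of adjusting them at
runtime. The daemon name ``mds.a`` and the values are illustrative
assumptions, not recommendations::

  # Raise the MDS cache memory limit to 2 GiB (the default is 1 GiB) and
  # keep the default 5% cache reservation.
  ceph tell mds.a injectargs '--mds_cache_memory_limit=2147483648'
  ceph tell mds.a injectargs '--mds_cache_reservation=0.05'

  # Inode-count limiting is still available; 0 (the default) disables it.
  ceph tell mds.a injectargs '--mds_cache_size=0'

The same options can also be set persistently in ceph.conf.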
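
And a sketch of the new PG-count check in practice; the pool name, PG
counts, and OSD count below are illustrative assumptions::

  # On a small cluster (say 10 "in" OSDs) with the default limit of
  # 200 PGs per OSD, a request like this would now be refused because the
  # projected PG count per OSD would be far too high:
  ceph osd pool create newpool 4096 4096

  # If the extra PGs are genuinely needed, raise the limit first, e.g.:
  ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd=300'
  # (or set mon_pg_warn_max_per_osd in ceph.conf on the monitors)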