I recently stumbled upon the following warning on my production Ceph cluster for an OSD on one of our machines:
```
root@ceph-8:~# ceph health detail
HEALTH_WARN 1 OSD(s) experiencing BlueFS spillover; 1 stray daemon(s) not managed by cephadm
[WRN] BLUEFS_SPILLOVER: 1 OSD(s) experiencing BlueFS spillover
    osd.211 spilled over 282 MiB metadata from 'db' device (7.0 GiB used of 8.7 GiB) to slow device
```

I didn't find much information about this warning, and most of what I did find came from people experimenting with limits around the block devices.