
MinIO vs Ceph benchmark



Gluster was purchased in 2011 by Red Hat. OpenStack Pure vs Ceph performance test result. You can then build a larger, scale-out Ceph/GlusterFS solution that is purpose-built to handle the massive capacity of your long-term file store, and focus it towards the heavy read use…

Mar 31, 2020 · GlusterFS, Ceph and MinIO are three of the more popular and widely deployed distributed storage systems in the open-source world. This is the second episode of the performance blog series on RHCS 3.2 BlueStore running on the all-flash cluster.

May 29, 2018 · The Ceph reference architecture is tuned for small random block performance in both the FileStore and BlueStore cases. Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. Ceph in IBM Cloud Private is deployed and managed via Rook. MinIO is rated 9.0, while Red Hat Ceph Storage is rated 7…

Sep 14, 2021 · The line between HPC and Cloud is getting blurry: performance is still the main driver in HPC, while cloud storage systems are assumed to offer…

Ceph uses object storage, which means it stores data in binary objects spread out across lots of computers. Minio did not reveal the disk drives used, but did say the drives were working full tilt…

Jun 25, 2019 · In comparison with Ceph, DRBD and LINSTOR are easier to troubleshoot, recover, repair, debug, and easier to intervene with manually if required, mainly due to their simplicity. For sysadmins, the better maintainability and a less complex environment can be crucial.

To try Ceph, see our Getting Started guides. One had to be very careful in how CRUSH maps and cache tiering were assigned to get it to work right; otherwise performance would be impacted and data would not be distributed evenly. On the other hand, Minio is described as an "AWS S3 open source alternative written in Go". Both are healthy, open-source projects that are actively used by customers around the world; organizations use Ceph and Swift for different reasons. Swift is close, but has a slightly different native API (there's an adapter, but I'm not sure of its status). iperf shows between 6 and 11 Gbps; latency is …

Introduction: In this work we will show the testing activity carried out on the aforementioned distributed open-source file systems in order to check their capabilities. Jul 03, 2019 · This guide will dive deep into a comparison of Ceph vs GlusterFS vs MooseFS vs HDFS vs DRBD.

In computing, Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Minio is an object storage server compatible with Amazon S3 and licensed under Apache 2.0. If you use an S3 API to store files (as Minio does), you give up power and gain nothing. fio did a total of 1 GB of IO at, for Pure, 227.35 MB/s for a total of 58,202 IOPS (at the default 4K block size).

For a local Minio Gateway S3 setup, change the endpoint, access-key, secret-key, bucket and opts to match your own deployment.
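A minimal sketch of such a setup, assuming the Python `minio` SDK and a MinIO server on `localhost:9000` with the default `minioadmin` credentials — the endpoint, credentials, bucket and file names here are illustrative placeholders, not values from any of the benchmarks quoted above:

```python
from minio import Minio

# Placeholder endpoint and credentials for a local MinIO instance.
client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)

bucket = "test-bucket"          # placeholder bucket name
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local file, then list what the bucket now contains.
client.fput_object(bucket, "hello.txt", "/tmp/hello.txt")
for obj in client.list_objects(bucket):
    print(obj.object_name, obj.size)
```

Pointing the same client at a Ceph RGW or any other S3-compatible endpoint only requires swapping the endpoint and credentials.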
Whether you would wish to attach block devices to your virtual machines or to store unstructured data in an object store, Ceph can do both. Aug 06, 2019 · Unfortunately, while stress-testing Ceph volumes I consistently ran into this issue, which makes Ceph unstable. MinIO Server is a high-performance, open-source, S3-compatible object storage system designed for hyper-scale private data infrastructure.

Cloud service providers which provision VM services can use iSCSI on top of a Ceph cluster. Mar 23, 2017 · Minio co-founder and CEO Anand Babu Periasamy was one of the co-founders of Gluster, the company behind GlusterFS. Periasamy said he wanted to avoid the mistakes that GlusterFS made.

At 100 clients, tail latency is decreased by 4.3X. Also worth a look is LeoFS (which, despite the name, is not a filesystem). Jul 03, 2018 · Finally, we provide a performance evaluation of Ceph with iWARP RDMA, which demonstrates up to 17 percent performance improvement compared with the TCP/IP stack. Tests document performance of components on a particular test, in specific systems. Erasure-coded pools (EC pools) are outside the scope of this document. There's also the fact that Ceph performance is directly tied to the number of OSDs.

Seafile - High-performance file syncing and sharing, with Markdown WYSIWYG editing, wiki, file labels and other knowledge management features. s3bench is supported as a single container, composed service or K8s/OpenShift job.

Mar 24, 2021 · This post explains how I measured Ceph RBD performance with block/network virtualization technology (virtio and vhost), and the result. VM execution is done through qemu-system-x86_64, without using libvirt; the guest is Ubuntu 20.04 LTS Server. The tests also looked at the behavior when a disk fault occurs. Resources on RWX via NFS/CephFS. So you are better off using NFS, Samba, WebDAV, FTP, etc.

Nov 12, 2013 · On the Gluster vs Ceph Benchmarks. If you've been following the Gluster and Ceph communities for any length of time, you know that we have similar visions for open software-defined storage and are becoming more competitive with each passing day. A side-by-side object storage product comparison of features, performance, functionality and specs. Seems like Pure has 13.25% more performance than the Ceph distributed storage system.

Both Ceph's and MinIO's implementations differ from the original AWS S3 server implementation in subtle ways. Jul 24, 2019 · Presto on Minio vs Amazon S3, Apache Spark with a Minio back-end vs AWS S3. MinIO is software-defined and is 100% open source under GNU AGPL v3. The bottom line of the Ceph vs. Swift debate is that neither of the two object storage systems is better than the other; they serve different purposes, so both will persist. As a storage administrator, you can benchmark the performance of the Red Hat Ceph Storage cluster.
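A client-side version of that kind of benchmark can be scripted against any S3-compatible endpoint (Ceph RGW, MinIO, or AWS S3). The sketch below assumes `boto3`, an existing bucket, and placeholder endpoint and credentials; the 4 KB object size and 100-request count are arbitrary choices, and the p99 figure is the kind of tail latency the remarks above refer to:

```python
import statistics
import time

import boto3

# Placeholder endpoint, credentials and bucket for an S3-compatible service.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

payload = b"x" * 4096                     # 4 KB objects
latencies = []
for i in range(100):
    start = time.perf_counter()
    s3.put_object(Bucket="bench-bucket", Key=f"obj-{i}", Body=payload)
    latencies.append(time.perf_counter() - start)

latencies.sort()
p99 = latencies[int(len(latencies) * 0.99) - 1]
print(f"avg {statistics.mean(latencies) * 1000:.2f} ms, p99 {p99 * 1000:.2f} ms")
```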
Rook is a framework to make it easy to bring storage back-ends to run inside of Kubernetes. Ceph is a robust storage system that uniquely delivers object, block (via RBD), and file storage in one unified system. Ceph's performance is decent, as shown in the benchmarks. MinIO offers high-performance, S3-compatible object storage. The performance problems – Ceph on an all-flash array, 4K and 64K performance scaling: 1.2M IOPS for 4K random read at 1.3 ms latency, 270K IOPS for 4K random write… We have been rivals in a similar space for some time, but on friendly terms.

Jan 14, 2020 · Ceph's algorithm, CRUSH, liberates storage clusters from the scalability and performance limitations imposed by centralised data devices. The Ceph RADOS Block Device is integrated to work as a back end with… New with ceph-csi and Rook: much investment in ceph-csi, RWO and RWX support via RBD and/or CephFS, snapshots, clones, and so on. For managing storage, Ceph uses its own tools.

Dec 18, 2018 · Use a number of high-performance SSDs, and build and tune the unit(s) directly for high performance at the appropriate block sizes. Mainly deployed in cloud-based installations, it provides a scalable and reliable alternative to traditional storage applications. I don't think I am saturating network links yet, because the tests that I did yield a speed of 110 MiB/s. When comparing Minio and GlusterFS, you can also consider the following products.

May 29, 2018 · RBD FIO 4KB random write performance: FileStore vs BlueStore. Even a 5-10 minute project will run a couple of GBs of memory. I took a sample output and plotted the chart to compare the performance of Pure vs Ceph. Ceph's architecture is of significant value in large-scale, capacity-optimized storage, where performance is not a significant concern.

Red Hat Ceph Performance & Sizing Guide, Jose De la Rosa, September 7, 2016. GlusterFS has its origins in a highly efficient, file-based storage system. What you get in return is easier scalability and performance. Purity//FB supports cloud mobility with object replication and disaster recovery with file replication. So, we do not suggest Gluster for handling data that requires high speed.

Create and manage multiple Kubernetes clusters using KVM on a bare-metal Fedora 29 server. Red Hat® Ceph® Storage is an open, massively scalable, simplified storage solution for modern data pipelines. Ceph is an emerging storage solution with object and block storage capabilities. To learn more about Ceph, see our Architecture section. Minio is an open-source minimal cloud storage server. Differences in hardware, software, or configuration will affect actual performance.

Mar 18, 2020 · This tool integrates with every S3 API (AWS, Minio, Ceph, etc.) and can help you analyze your S3 service's behavior using the ELK stack for visualizations and dashboards. The command-line interface also allows you to work with object storage data using familiar Unix commands, like ls and diff. You have to poll whether an uploaded object is there yet, which might take a long time for bigger files; a small polling sketch follows below.
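That poll-until-visible behaviour can be made explicit in client code. A minimal sketch with `boto3`, using placeholder endpoint, credentials, bucket and key (the timeout and interval are arbitrary); it simply retries HEAD until the object shows up or the deadline passes:

```python
import time

import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials for an S3-compatible service.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

def wait_until_visible(bucket, key, timeout=30.0, interval=0.5):
    """Poll HEAD until a freshly uploaded object is readable, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            s3.head_object(Bucket=bucket, Key=key)
            return True
        except ClientError as err:
            # 404 means "not visible yet"; anything else is a real error.
            if err.response["Error"]["Code"] not in ("404", "NoSuchKey"):
                raise
        time.sleep(interval)
    return False

s3.put_object(Bucket="test-bucket", Key="big-file.bin", Body=b"...")
print(wait_until_visible("test-bucket", "big-file.bin"))
```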
Sep 24, 2021 · Last updated September 24th, 2021 for updates to Cloudian HyperStore. Ceph uniquely delivers object, block, and file storage in one unified system. Minio and Ceph both fit this description, with the latter also offering block and file access.

1) Four standard x86 PC servers (a, b, c, d). As a result, StorPool often replaces Ceph systems which cannot deliver enough performance. By being lightweight and extremely capable, StorPool can deliver performance which is simply impossible to achieve with Ceph. For more complete information about performance, consult other sources of information to evaluate performance as you consider your purchase.

May 06, 2019 · Recap: In Blog Episode 1 we covered RHCS, the BlueStore introduction, lab hardware details, benchmarking methodology, and a performance comparison between the default Ceph configuration and a tuned Ceph configuration. Background: Red Hat Ceph, one of today's most popular distributed storage systems, provides scalable and reliable object, block, and file storage services in a single platform.

Compared to the average respondent, the 27% of Kubernetes users who were storage-challenged were more likely to evaluate Rook (26% vs 16%), Ceph (22% vs 15%), Gluster (15% vs 9%) and OpenEBS (15% vs 9%). Jan 29, 2020 — ceph vs Minio: Ceph is described as "a free-software storage platform". I frequently get the same question from enterprise customers who say, "We heard about this Ceph thing…"

When comparing Minio and Ceph, you can also consider the following products. Google Cloud Storage - Google Cloud Storage offers developers and IT organizations durable and highly available object storage. Azure Blob Storage - Use Azure Blob Storage to store all kinds of data. Minio is widely deployed across the world, with over 164.1M+ Docker pulls.

Additionally, Minio doesn't seem to sync files to the file system, so you can't be sure a file is actually stored after a PUT operation. Sep 02, 2021 · Ceph alternatives. MinIO has a rating of 4.72 stars with 86 reviews, while Red Hat has a rating of 4.27 stars with 57 reviews.

Jun 28, 2021 · ClickHouse now fully supports both AWS S3 and MinIO as S3-compatible object storage services. What's the difference between MinIO, Mnesia, and MongoDB? Compare MinIO vs. Mnesia vs. MongoDB in 2021 by cost, reviews, features, integrations, deployment, target market, support options, trial offers, training options, years in business, region, and more using the chart below.

I'm looking for something that can handle hundreds of files per second — write, read, and delete — with sometimes bursts into the thousands; this is orders of magnitude… The line chart is based on worldwide web search interest for the past 12 months. Oct 26, 2018 · Minio is the best private cloud storage.

Oct 31, 2017 · Ceph block storage uses a Ceph Block Device, which is a virtual disk that can be attached to bare-metal Linux-based servers or virtual machines.
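As a sketch of what creating such a virtual disk looks like programmatically, assuming the `python3-rados` and `python3-rbd` bindings, a reachable cluster with an admin keyring, and a pool named `rbd` — the pool name, image name and size are placeholders:

```python
import rados
import rbd

# Connect using the local ceph.conf and default admin keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")              # placeholder pool name
    try:
        rbd.RBD().create(ioctx, "vm-disk-01", 10 * 1024 ** 3)   # 10 GiB image
        with rbd.Image(ioctx, "vm-disk-01") as image:
            image.write(b"hello", 0)               # block-level write at offset 0
            print(image.size())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The resulting image can then be mapped to a host with `rbd map` or attached to a virtual machine as a virtio disk.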
Scality has open-sourced their implementation as Zenko. On the other hand, the top reviewer of Red Hat Ceph Storage writes "Excellent user interface, good configuration capabilities and quite stable". Ceph is highly reliable, easy to manage, and free.

Mar 01, 2021 · GlusterFS vs Ceph performance – who wins? Gluster can handle a high number of files, but when it comes to performance this massive data handling slows it down. It builds a private cloud system with OpenStack, with ease of use and performance. Due to the technical differences between GlusterFS and Ceph, there is no clear winner.

Ceph worked more reliably, but IIRC, for both MinIO and Ceph there is no guarantee that a file you upload is readable directly after upload. It is not clear yet whether it's a bug in Ceph or a problem in how Rook manages Ceph; tweaking some memory settings seems to help but does not eliminate the problem entirely. The version I ran really is intended to be used for massive data centers in the petabyte range.

2) Each PC server provides two bricks. There is also a large decrease in the tail latency of Ceph at higher FIO client counts with BlueStore. Minio reported 8.57 GB/sec write performance using AWS bare-metal, storage-optimised instances with 25GbitE networking. Open-sourced under an Apache 2.0 license, the Minio object storage server provides an AWS S3-compatible storage service. Jul 13, 2020 · Again, running Minio on a local-network (actually same network switch) computer, just a default setup pointing to a directory. It offers an API for integrating with S3 object storage servers, as well as a command-line interface that can communicate with S3.

If Ceph could export a block service with good performance, it would be easy to… Oct 08, 2018 · I am looking into upgrading my cluster. Should I add 10Gb Ethernet cards or SSDs, or both, considering a limited budget? Universal operators streamline your Kubernetes deployments and operations across all clouds. There are more than 10 alternatives to Ceph for a variety of platforms, including Linux, self-hosted solutions, Mac, Windows and BSD. Now let us focus on the architecture of these three distributed systems and how they compare to RAID, so that the storage picture becomes thoroughly clear. Next, they dive into the underlying design of MinIO.

I noticed during the test that Ceph was totally hammering the servers – over 200% CPU utilization for the Ceph server processes, vs. less than a tenth of that for GlusterFS. The power of Ceph can transform your company's IT infrastructure and your ability to manage vast amounts of data. StorPool has fewer components than Ceph, in order to eliminate pieces that do not add to performance or reliability. May 19, 2019 · Ceph and Gluster have, essentially, the same tools, just a different approach. Rook is more than just Ceph. Minio is a distributed object storage server written in Go and open source under Apache version 2.0.

Engineered for data analytics, artificial intelligence/machine learning (AI/ML), and emerging workloads, Red Hat Ceph Storage delivers software-defined storage on your choice of industry-standard hardware. Host: Fedora 33 (Linux kernel 5…).

Ceph storage pools can be either replicated or erasure-coded, as appropriate for the desired application and cost model. So 3 sounds reasonable and a good balance between performance and OSD loss. More OSDs give you more throughput, especially with low-performance hard drives.
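To make the pool-sizing trade-off concrete, here is a small sketch of the commonly cited rule of thumb of roughly 100 placement groups per OSD, divided by the replica count and rounded up to a power of two; the OSD count and replica count below are illustrative, not taken from any cluster described above:

```python
def suggest_pg_count(num_osds: int, replica_size: int, target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb PG count: OSDs * target PGs per OSD / replicas, rounded up to a power of two."""
    raw = num_osds * target_pgs_per_osd / replica_size
    power = 1
    while power < raw:
        power *= 2
    return power

# 12 OSDs with 3 replicas -> 400 raw -> 512 PGs for the pool.
print(suggest_pg_count(12, 3))
```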
When engineers talk about storage and Ceph vs Swift, they usually agree that one of them is the best and the other a waste of time. Trouble is, they usually don't agree on which one is which. AgFlow, codebeat, and Minio are some of the popular companies that use Minio, whereas Ceph is used by DigitalOcean, Twilio SendGrid, and Mirantis.

Dec 19, 2017 · Minio. Splunk: find out how MinIO delivers performance at scale for Splunk SmartStore. Teradata: discover why MinIO is the native object store (NOS) of choice for at-scale Teradata deployments. Azure to AWS S3 Gateway: learn how MinIO allows Azure Blob to speak Amazon's S3 API.

The Ceph Storage Cluster is the foundation for all Ceph deployments. Ceph is a proven distributed storage software that supports block access, for which there is strong demand from users. Block service is exported by Ceph via the iSCSI protocol. The Ceph Reliable Autonomic Distributed Object Store (RADOS) provides block storage capabilities, such as snapshots and replication.

Dec 07, 2019 · Minio – a high-performance object storage. Minio is a high-performance object storage solution which enabled us to build native cloud infrastructure for data management, reporting and analytics. Minio is an open-source object storage solution written mostly in Go. minio vs Nextcloud. Compares products from Dell EMC, IBM, HGST and more.

FlashBlade's scale-out metadata architecture can handle tens of billions of files and objects with maximum performance and rich data services. BlueStore provides a ~18% increase in IOPS and a ~15% decrease in average latency. Depending on the architecture, both solutions will significantly outpace each other and have great performance. For those new to GlusterFS, a brick is a basic unit of storage. That being said, there's no reason you can't stretch the OSD memory that far, but it's important to keep in mind that the full 8 GB on the RPi is not truly usable by the OSD daemons. Ceph is HA but hard to administer. AWS S3 SDK built from source, both machines on Fedora 31. Partners will introduce plug-ins in support of the platform over time.

Ceph vs Swift – An Architect's Perspective. Established formula: journal number = (SSD sequential write speed) / (spinning disk sequential write speed). Example with an enterprise-level SSD (~340 MB/s sequential writes with o_direct and d_sync) and an enterprise-level spinning disk (110 MB/s sequential writes): 340 / 110 ≈ 3.1.
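Worked through with the numbers quoted above (an illustration of the formula, not an additional measurement):

```python
ssd_seq_write_mb_s = 340   # enterprise SSD, sequential writes with o_direct and d_sync
hdd_seq_write_mb_s = 110   # enterprise spinning disk, sequential writes

journals_per_ssd = ssd_seq_write_mb_s / hdd_seq_write_mb_s
print(f"{journals_per_ssd:.1f}")   # ~3.1, i.e. roughly three HDD journals per SSD
```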
Developers describe Ceph as "a free-software storage platform". Jun 12, 2014 · Interesting to see someone comparing Ceph vs Swift performance. GlusterFS - GlusterFS is a scale-out network-attached storage file system. Also, the numbers at 1K files weren't nearly as bad.

Ceph: scalable object storage with block and file capabilities. Gluster: scalable file storage with object capabilities. The differences, of course, are more nuanced than this, based on the way each program handles the data it stores. Ceph is basically an object-oriented memory for unstructured data, whereas GlusterFS uses hierarchies of file-system trees in block storage. I would be highly interested in the Ceph vs Swift performance degradation when putting a large amount (millions) of objects on a bit beefier hardware (e.g. when doing this you should have SSDs for the Swift container servers).

With FileStore, reaching a higher object read throughput may be achievable by altering ceph.conf, though that will most likely reduce 4KB random block performance. Rook 1.1: turn-key ceph-csi by default, dynamic bucket provisioning (ObjectBucketClaim), external cluster mode, running mons or OSDs on top of other PVs, and upgrade improvements.

Minio is a lightweight, Amazon S3-compatible distributed object storage server. Mar 06, 2018 · Ceph data is strongly consistent across the cluster, whereas Swift data is eventually consistent; it may take some time before data is synchronized across the cluster. It is an ideal tool for a big, standalone cloud environment with high security requirements and delivery of mission-critical availability. The higher availability also results in better reliability.

Ceph's lack of compression and de-duplication, combined with its use of erasure coding for object storage, highlights it as a good choice for storage of large-scale data such as backups and images. MinIO creates erasure-coding sets of 4 to 16 drives per set. In order to read from Ceph you need an answer from exactly one copy of the data. Minio is simplest for your app. Ceph is a very popular storage solution for OpenStack deployments given its free nature and deep integration. GlusterFS architecture introduction.

Aug 10, 2017 · The primary reason for this is that Ceph is held back by its write performance, which forces the benchmark to throttle read requests to maintain the 70/30 mix. Ceph was very difficult to set up when we used it. These tools will provide some insight into how the Ceph storage cluster is performing. CRUSH replicates and rebalances data within the cluster dynamically — eliminating this tedious task for administrators, while delivering high performance and infinite scalability. Looking at my current Ceph stats they are not even at 1-2 MiB/s.

In the modern world, where big data and unstructured data have brought new requirements, people have shifted from SAN's high-cost performance to lower-cost, smarter approaches based on file and object storage. So writes should be ~1/3 the speed of your reads, but in practice they are slower than that. Includes helm + rook-ceph + nginx ingress + the stock analysis engine (jupyter + redis cluster + minio + automated cron jobs for data collection) — works on Kubernetes v1…

In this comparison, we will test the performance of AWS S3 and MinIO when used to store table data from two of our standard datasets.
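A minimal sketch of how such a comparison can be driven from the client side with `boto3` — the same code runs against AWS S3 and MinIO, with only the endpoint and credentials changing; the endpoints, credentials, bucket names and the 64 MB object size below are placeholders, not the datasets or hardware from the comparison itself:

```python
import io
import time

import boto3

def upload_throughput(endpoint_url, access_key, secret_key, bucket, size_mb=64):
    """Upload one object of size_mb and return the observed MB/s."""
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,               # None means the real AWS S3 endpoint
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    body = io.BytesIO(b"\0" * (size_mb * 1024 * 1024))
    start = time.perf_counter()
    s3.upload_fileobj(body, bucket, "throughput-test.bin")
    return size_mb / (time.perf_counter() - start)

print("AWS S3 :", upload_throughput(None, "AWS_KEY", "AWS_SECRET", "my-aws-bucket"))
print("MinIO  :", upload_throughput("http://minio.local:9000", "minioadmin", "minioadmin", "my-minio-bucket"))
```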
Even dd and iozone seem to be fine for benchmarking NFS, so maybe I don't need to think about it too hard. Although it can run as a standalone server, its full power is unleashed when deployed as a cluster with multiple nodes – from 4 to 32 nodes. I've seen a few toy S3 implementations. In-memory computing provides significant performance benefits, but brings along challenges for managing failures and scaling up. To do a write you need to complete the write to each copy of the journal – the rest can proceed asynchronously. When used in conjunction with high-performance networks, Ceph can provide the needed performance.

What are the recommendations to improve my cluster performance? The top reviewer of MinIO writes "Easy to manage and has a powerful API". The focus for Rook is not only bringing Ceph — which covers block, filesystem and object storage — but also persistence at a more application-specific level, by running CockroachDB and Minio through a Rook operator. docs: How does Minio compare to other object storage (Ceph, etc.)? (#1168)

Compare MinIO vs Red Hat based on verified reviews from real users in the Distributed File Systems and Object Storage market. Ceph is described as a 'distributed object store and file system designed to provide excellent performance, reliability and scalability' and is an app in the OS & Utilities category. MinIO is a high-performance, open-source, S3-compatible object storage system designed for hyper-scale private data infrastructure and can be installed on a wide range of industry-standard hardware.

Jun 25, 2020 · SAN vs Ceph performance: in the legacy data centre, SAN arrays were prominently used for their performance as storage back-ends for databases. Native to Kubernetes, MinIO is the only object storage suite available on every public cloud, every Kubernetes distribution, the private cloud and the edge. MooseFS – an open-source, petabyte-scale, fault-tolerant, highly performing, scalable network distributed file system (software-defined storage). minio vs GlusterFS. IDC Frontier deployed a Ceph storage system and conducted tests of its basic data read and write performance. Ceph pool configuration dictates the number of object replicas and the number of placement groups (PGs) in the pool.

Some researchers have made a functional and experimental analysis of several distributed file systems including HDFS, Ceph, Gluster, Lustre and an old (1.x) version of MooseFS, although that document is from 2013 and a lot of the information is outdated (e.g. MooseFS had no HA for the Metadata Server at that time). Sep 14, 2016 · Ceph Performance and Sizing Guide. The purpose of this section is to give Ceph administrators a basic understanding of Ceph's native benchmarking tools.
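The usual native tool here is `rados bench`, and the same kind of quick check can be scripted with the Python `rados` bindings. A minimal sketch, assuming `python3-rados` is installed, the cluster is reachable via the local ceph.conf, and a pool named `benchpool` already exists (the pool name, object size and object count are placeholders):

```python
import time

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("benchpool")        # placeholder pool name

payload = b"\0" * (4 * 1024 * 1024)            # 4 MiB objects, similar to rados bench defaults
count = 32
start = time.perf_counter()
for i in range(count):
    ioctx.write_full(f"bench-obj-{i}", payload)
elapsed = time.perf_counter() - start
print(f"{count * 4 / elapsed:.1f} MiB/s over {count} x 4 MiB objects")

# Remove the benchmark objects and disconnect.
for i in range(count):
    ioctx.remove_object(f"bench-obj-{i}")
ioctx.close()
cluster.shutdown()
```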
As both Rook and Ceph are open-source projects, IBM Cloud Private relies on the open-source community to bring improvements and provide bug fixes if needed. Interest over time of Minio and Ceph — note that it is possible some search terms are used in multiple areas, which could skew some graphs. In such cases, Ceph is the best, as it accesses the data rapidly. See side-by-side comparisons of product capabilities, customer experience, pros and cons, and more. Ceph-RGW, Minio, Riak CS and Swift are designed to…; to evaluate the performance of the services, the required time to carry out a set of operations was measured. The question of whether to use NFS with some non-Ceph solution or CephFS is a big one. From my experience, I'm not sure comparing them by general performance is the right metric.