You are viewing a single comment's thread from:

RE: A developer’s introduction to the Hive Application Framework

in HiveDevs · 2 years ago (edited)

Hey, I'd be interested in any experience you have with a mid-tier setup, such as using NVMe drives as cache LVs in front of a striped rotational disk array. I have used them before, but they usually only work well for data subsets that either stay hot for a period and are then very rarely accessed, or for append-mode data streams where there is very little re-reading of newly written data.

Databases are a mess for the above workloads, but if the cache LV is big enough to hold the hot tables, they stay hot in the cache and hardly ever need to be read from the rotational disks.

With the above scheme, you could easily configure a 1TB NVMe for cache and a couple of either small, cheap SSDs or even rotational disks for the high-capacity part.
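For reference, the cache-LV scheme described above can be sketched with stock LVM commands. This is a minimal illustration, not a tested recipe: the device names, sizes, and VG/LV names are all assumptions, and a real setup would size the cache pool to the hot working set.

```shell
# Assumed devices: two rotational disks (sda, sdb) and one 1TB NVMe (nvme0n1).
pvcreate /dev/sda /dev/sdb /dev/nvme0n1
vgcreate vg0 /dev/sda /dev/sdb /dev/nvme0n1

# Striped LV across the two rotational disks for the high-capacity part.
lvcreate --type striped -i 2 -L 4T -n data vg0 /dev/sda /dev/sdb

# Cache pool on the NVMe (leave some headroom below the full 1TB).
lvcreate --type cache-pool -L 900G -n cache0 vg0 /dev/nvme0n1

# Attach the cache pool to the striped LV; writethrough is the default,
# writeback trades safety for write latency.
lvconvert --type cache --cachepool vg0/cache0 vg0/data
```

With `--cachemode writeback`, hot database tables would also absorb writes on the NVMe before they hit the rotational stripe, at the cost of data loss risk if the cache device fails.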

Just curious whether you guys explored these options and what your end conclusions were.

Thanks


We haven't tested it here because we're generally trying to test a max performance setup to see where the potential performance bottlenecks are, but a HAF setup like that should be fine.

I see... I will probably have to rely on such a setup for now, but yeah, in a few years' time NVMes might well have replaced most storage types (IOPS/$-wise) below the 100TB range.

Curious whether you guys used external storage for the NVMes (Fibre Channel or SAS attached) or something like this? That's something I want to grab later in the year.

[image: ASUS PCIe-to-quad-NVMe adapter card]

Yes, we use the very ASUS board you showed (and a couple of other similar ones by MSI and Cablecc) that lets you use one PCIe x16 slot in "bifurcated mode" to connect four 2TB Gen4 NVMe drives, which we typically configure into a 4x RAID 0 8TB volume.
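A four-drive software RAID 0 like the one described can be assembled with `mdadm`. A minimal sketch, assuming the bifurcated drives show up as `nvme0n1`–`nvme3n1` (device names, array name, and filesystem choice are illustrative, not what the HAF team necessarily uses):

```shell
# Stripe the four Gen4 NVMe drives into one ~8TB RAID 0 array.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Put a filesystem on it and mount.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/haf
```

RAID 0 has no redundancy, so a single drive failure loses the array; that tradeoff makes sense here only because the node's data can be re-synced from the network.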

We've been using Ryzen 3960X and, more recently, 5950X systems that support PCIe 4.0 drives. We buy pre-tested bare-bones 5950X systems from a systems integrator, then populate them with better drives and a full complement of memory once we get them (integrators tend to overcharge for those items).

Nice 👍 thanks! AMD still beats Intel on memory access, the I/O subsystem (PCIe lanes), and price for now. Good to know it's not just me in the HPC market.

I will be looking closely at when PCIe 6.0 gets implemented, because that will change the protocol. But it's still too soon, as the market is delayed due to the ASIC shortage. Let's see.

I'm on the same path as well: going to stay on the AM4 socket and jump to the highest-core-count part, since I already have plenty of memory to play with and there's still some time until PCIe 5 products get to market.