Have you done any research into the performance differences between multiple smaller LUNs and fewer larger LUNs? I'm just unclear on what the best practices would be in this scenario:
- 1 server running VMware ESXi with 4 iSCSI network connections
- 1 EMC SAN system with 4 iSCSI network connections
(everything is redundant)
- On the ESXi host I have 5 virtual machines, and each machine needs, for argument's sake, 2 virtual hard drives.
So should I just create a single giant LUN and have ESXi manage all 10 virtual hard disk files on it, create 5 LUNs with 2 virtual hard disk files on each, or create 10 LUNs with a single virtual hard disk file per LUN?

-Jim
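To make the three options concrete, here is a minimal plain-Python sketch of how the 10 VMDKs would be distributed in each case. The lun00 / vm1-disk0 names are made up purely for illustration and don't correspond to anything on the array; the sketch just keeps each VM's disks together and splits them across 1, 5, or 10 LUNs.

```python
# Map 5 VMs x 2 virtual disks (10 VMDKs total) onto 1, 5, or 10 LUNs.
# All names are hypothetical; this only illustrates the three layouts.

VMS = [f"vm{i}" for i in range(1, 6)]   # the 5 virtual machines
DISKS_PER_VM = 2                        # 2 virtual hard drives each

def layout(num_luns):
    """Assign the 10 VMDKs to num_luns LUNs, keeping a VM's disks together."""
    disks = [f"{vm}-disk{d}" for vm in VMS for d in range(DISKS_PER_VM)]
    per_lun = len(disks) // num_luns    # 10, 2, or 1 VMDKs per LUN
    return {f"lun{i:02d}": disks[i * per_lun:(i + 1) * per_lun]
            for i in range(num_luns)}

for n in (1, 5, 10):
    print(f"--- {n} LUN(s) ---")
    for lun, vmdks in sorted(layout(n).items()):
        print(f"  {lun}: {', '.join(vmdks)}")
```

Running it just prints which VMDKs land on which LUN for each of the three options, so you can see exactly what "1 vs 5 vs 10" means in terms of datastore layout.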
asavitzk (March 16 2010, 14:43:19 UTC):
Did some research and testing on this before :)
You want smaller LUNs so rebuild times won't be ghastly in case of drive failures. We had 6 drive failures, and in each case you get one rebuild to bring the spare online and then another rebuild when you replace the faulty drive (they really needed a floating spare, now that I think of it) :) I think the 10-LUN layout is the best option for performance as well, in terms of disk access times. We made three 5-drive LUNs and bound them into a metaLUN for each database server, and the performance was slightly better than the side-by-side disk tray on the blade servers.
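To get a feel for how long that double rebuild can leave you exposed, here is a rough back-of-the-envelope calculation. The drive capacity and sustained rebuild rate below are assumptions picked for illustration, not numbers from the array described above:

```python
# Rough arithmetic for the double-rebuild window described above: one rebuild
# onto the hot spare, then a second pass when the faulty drive is replaced.
# DRIVE_GB and REBUILD_MB_S are assumed values for illustration only.

DRIVE_GB = 300        # assumed per-drive capacity
REBUILD_MB_S = 30     # assumed sustained rebuild rate under production load

def rebuild_hours(drive_gb=DRIVE_GB, rate_mb_s=REBUILD_MB_S):
    """Hours for one full pass over a single drive at the given rate."""
    return (drive_gb * 1024) / rate_mb_s / 3600

one_pass = rebuild_hours()
print(f"rebuild onto hot spare: {one_pass:.1f} h")
print(f"copy back to new drive: {one_pass:.1f} h")
print(f"total exposure window:  {2 * one_pass:.1f} h")
```

The total is simply two passes, one per rebuild mentioned in the comment; plug in your own drive size and observed rebuild rate to see what a single failure actually costs you.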