NodeWeaver relies on a sophisticated and flexible storage layer, built largely on the LizardFS distributed filesystem, which performs reliably and efficiently even with a small number of nodes and disk drives. Our approach to storage differs in several important ways from the most common choices in other hyper-converged infrastructures, and even in dedicated storage systems; this post outlines some of those differences.
NodeWeaver has been designed, from scratch, as a single optimized entity, following an idea first taught to me by my electrical communications professor during a discussion of Trellis coding: if you optimize each step in isolation, you miss the larger, potentially much more important optimization of the system as a whole. It is a common problem, and in some cases it comes down to economics: pooling resources can yield higher performance or better utilization, because you often need a resource to be available, but you rarely need all of it at once. What matters is the maximum achievable performance, not the average. You want your car to go fast when you need it (for example, when overtaking a slower vehicle), even though on average it travels much slower; the average speed of cars in London is a mere 19 km/h.
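The pooling argument above can be made concrete with a small simulation. The sketch below is illustrative only: the node count, demand values, and spike probability are invented numbers, not NodeWeaver measurements. It shows that because bursts on different nodes rarely coincide, the peak of the pooled demand stays far below the sum of the individual peaks, which is exactly the utilization gain pooling buys you.

```python
import random

random.seed(42)
NODES = 8         # hypothetical cluster size (assumption, not a NodeWeaver figure)
SAMPLES = 10_000  # simulated time steps of bursty demand

def demand():
    """Per-node demand: usually 10 units, occasionally a 100-unit burst."""
    return 100 if random.random() < 0.05 else 10

# Sizing every node for its own worst case means provisioning for all
# bursts happening at once.
per_node_peak = 100 * NODES

# With pooling, only the combined demand at each instant matters.
pooled_peak = max(
    sum(demand() for _ in range(NODES)) for _ in range(SAMPLES)
)

print(f"capacity if each node is sized for its own peak: {per_node_peak}")
print(f"observed peak of the pooled demand:              {pooled_peak}")
```

With these (made-up) parameters the pooled peak comes out well under half of the per-node provisioning total, since the chance of all eight nodes bursting in the same instant is vanishingly small.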
Madrid, Spain, October 5th, 2015 – CloudWeavers and OpenNebula Systems team up to build turnkey hyperconverged infrastructures for next-generation enterprise computing. NodeWeaver is a drop-in replacement for traditional virtualization systems, avoiding the need for expensive and complex SANs and conventional storage arrays. Thanks to its integrated LizardFS distributed file system, NodeWeaver replaces the outdated concepts of LUNs and fixed allocations with a single "sea of data", from which the integrated OpenNebula orchestrator manages the virtual machines and their associated disk images.