Last week, Maggie Jones published an article on SearchConvergedInfrastructure.com titled “Hyper-converged infrastructure market isn’t right for every shop”. I agree with the assertion in the title and the points she raises, with the clarification that her points rest on assumptions that hold for vendors who only sell an integrated HCI appliance (a predefined, proprietary bundle of hardware and software), not for software-defined HCI products.
The chief information officer of Langs Building Supplies, Matthew Day, says that while many people in the early days saw the cloud as a silver bullet for managing complex infrastructure, it came with some nasty side effects, such as latency and cost creep.
“Hyperconverged infrastructure allows people to have public cloud-like management with the performance of on-prem,” Day says. “At Langs, we have used public cloud providers for disaster recovery and backup, but now we are even phasing that out and growing our hyperconverged infrastructure to cover these bases as well.
“I think a lot of service providers like Fujitsu, IBM, and Interactive are getting wise to this, as well as Google with their cloud platform. They have to offer services that provide a rich user performance experience. AWS and Azure are sorely lacking in this regard.”
As places like factories, big stores, and logistics warehouses get increasingly instrumented and automated, demand for computing capacity to process the data those instruments generate in real time is growing. That demand is fueling the rise of micro-data centers, which provide a relatively simple way to extend the edge of the corporate network to these facilities.
The trend toward cloud computing might dominate all the headlines in the tech media, but on-premises workloads are still all the rage among enterprises, according to a new survey by the Uptime Institute.
OpenNebula is the open-source platform of choice in the converged data centre, providing a simple yet flexible and powerful cloud manager that supports both traditional IT features and the dynamic provisioning of modern enterprise clouds. With thousands of deployments, OpenNebula has a very wide user base that includes leading companies in banking, technology, telecom and hosting, and research.
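For readers curious about what driving OpenNebula programmatically looks like, here is a minimal sketch that talks to the front-end’s XML-RPC interface using only the Python standard library. The endpoint URL, credentials and template values are placeholders of my own, so treat it as an illustration rather than a recipe for any particular deployment.

```python
import xmlrpc.client

# Hypothetical endpoint and credentials; replace with your own front-end's.
ENDPOINT = "http://frontend.example.com:2633/RPC2"
SESSION = "oneadmin:opennebula"   # OpenNebula session string: "user:password"

server = xmlrpc.client.ServerProxy(ENDPOINT)

# OpenNebula XML-RPC calls take the session string as their first argument
# and return a list whose first element indicates success.
resp = server.one.system.version(SESSION)
print("Front-end version:" if resp[0] else "Error:", resp[1])

# Allocate a deliberately minimal VM from an inline template; passing True
# as the last argument creates it on hold instead of scheduling it right away.
template = 'NAME = "newsletter-demo"\nCPU = 1\nMEMORY = 512'
resp = server.one.vm.allocate(SESSION, template, True)
print("New VM id:" if resp[0] else "Allocation failed:", resp[1])
```

The same calls are available through Sunstone and the command-line tools; the XML-RPC sketch is just the most self-contained way to show the API shape.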
This monthly edition of your favourite cloud management platform’s newsletter brings you the latest news from the OpenNebula project, highlights from the community, and the dissemination efforts carried out in the project this past month.
NodeWeaver takes advantage of an extremely sophisticated and flexible storage layer, based largely on the LizardFS distributed filesystem and capable of performing reliably and efficiently even with a small number of nodes and disk drives. There are several important differences between our approach to storage and the most common choices made in other hyper-converged infrastructures, or even in dedicated storage systems; this post will try to outline some of them.
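To make the small-cluster point concrete, here is a toy placement sketch in Python. It is not NodeWeaver’s or LizardFS’s actual chunk-placement logic, and the node names and replication goal are invented; it only illustrates why a goal of two replicas across even three nodes keeps every chunk readable when a single node is lost.

```python
# Invented cluster for illustration; real systems set goals per file or directory.
NODES = ["node-a", "node-b", "node-c"]
GOAL = 2   # replicas per chunk

def place(chunk_id):
    # Naive round-robin placement: each chunk gets GOAL replicas on distinct nodes.
    return {NODES[(chunk_id + i) % len(NODES)] for i in range(GOAL)}

placement = {chunk: place(chunk) for chunk in range(12)}

# Losing any single node should still leave at least one replica of every chunk.
for lost in NODES:
    unreadable = [chunk for chunk, replicas in placement.items()
                  if replicas <= {lost}]
    print(f"after losing {lost}: {len(unreadable)} unreadable chunks")
```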
NodeWeaver has been designed, from scratch, as a single optimized entity, following an idea my electrical communications professor first taught me at university during a discussion of Trellis coding: when you optimize only a single step in isolation, you miss the larger, potentially far more important optimization of the whole system. It's a common problem, and in some cases it comes down to economics: when you pool resources you can often get more performance or higher utilization, because you frequently need a resource to be available, but you rarely need all of them to be available at once.
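The pooling argument can be shown in a few lines of Python. This is a back-of-the-envelope simulation with made-up workload numbers, not a measurement of any real system: ten bursty workloads each peak at ten units, but only occasionally at the same time, so a shared pool can be sized for the observed peak of the sum rather than the sum of the individual peaks.

```python
import random

random.seed(1)
WORKLOADS, SAMPLES, PEAK = 10, 10_000, 10   # made-up numbers for illustration

def demand():
    # Bursty workload: mostly near-idle, at its peak roughly 10% of the time.
    return PEAK if random.random() < 0.1 else 1

siloed = WORKLOADS * PEAK   # provision every workload for its own peak
pooled = max(sum(demand() for _ in range(WORKLOADS)) for _ in range(SAMPLES))

print("capacity needed if siloed:", siloed)
print("capacity needed if pooled:", pooled)   # typically well below the siloed figure
```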