IoT/edge computing represents a significant area of growth, with profound impact for those who deliver these solutions and the customers who will use the insights gleaned to propel their businesses forward.
New technologies such as IoT and cloud architectures are driving computing to the edge. Companies must embrace this trend in order to survive.
HCI's easy implementation and management help admins solve some of the challenges of edge computing, such as distributed endpoint management and a trend toward data silos.
Everyday devices are becoming more powerful, reducing data center loads and complementing—or in some cases leapfrogging—cloud capabilities to drive exciting new IoT applications.
Nearly half of enterprises say they do not save money with the public cloud, and many report that they have figured out how to beat public cloud costs with their own private cloud.
Hyperconverged appliances and a thirst for subscription-based pricing – an outgrowth of the convenience and scalability made possible by public clouds – are transforming the notion of private clouds. Traditional capex procurement cycles and long-term software licensing are giving way to more nimble opex alternatives, and to consumption models that mimic public cloud with pay-as-you-use economics.
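The capex-versus-consumption trade-off above can be sketched with a toy comparison. All figures here are made-up assumptions for illustration, not any vendor's actual pricing:

```python
# Illustrative only: assumed, hypothetical prices comparing a traditional
# upfront capex purchase to a pay-as-you-use consumption model.

CAPEX_UPFRONT = 120_000   # hypothetical appliance + 3-year licensing, paid upfront
OPEX_PER_MONTH = 2_500    # hypothetical subscription rate at full utilization
MONTHS = 36

def capex_total(months: int) -> int:
    """Capex is paid upfront regardless of actual utilization."""
    return CAPEX_UPFRONT

def opex_total(months: int, utilization: float = 1.0) -> float:
    """Consumption pricing scales with how much capacity is actually used."""
    return OPEX_PER_MONTH * months * utilization

# At full utilization over 3 years the two models are in the same ballpark...
print(capex_total(MONTHS), opex_total(MONTHS))        # 120000 90000.0
# ...but at 50% utilization the consumption bill halves,
# while the capex outlay is unchanged.
print(capex_total(MONTHS), opex_total(MONTHS, 0.5))   # 120000 45000.0
```

The point of the sketch is the asymmetry: under consumption pricing, cost tracks usage, which is exactly the public cloud economics that opex models try to mimic on-premises.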
Last week, Maggie Jones published an article on SearchConvergedInfrastructure.com titled “Hyper-converged infrastructure market isn’t right for every shop”. I agree with the assertion in the title and the points she raises, with the clarification that they are based on assumptions that apply to vendors who only sell an integrated HCI appliance (a predefined, proprietary bundle of hardware and software) and not to software-defined HCI products.
The chief information officer of Langs Building Supplies, Matthew Day, says that while many people in the early days saw the cloud as a magic bullet for managing complex infrastructure, it came with some nasty side effects, such as latency and cost creep.
“Hyperconverged infrastructure allows people to have public cloud-like management with the performance of on-prem,” Day says. “At Langs, we have used public cloud providers for disaster recovery and backup, but now we are even phasing that out and growing our hyperconverged infrastructure to cover these bases as well.
“I think a lot of service providers like Fujitsu, IBM, and Interactive are getting wise to this, as well as Google with their cloud platform. They have to offer services that provide a rich user performance experience. AWS and Azure are sorely lacking in this regard.”
Florida Atlantic University’s Tech Runway® has selected its fifth and largest Venture class of startup and early-stage companies to participate in its business accelerator program.
As places like factories, big stores, and logistics warehouses become increasingly instrumented and automated, demand is growing for computing capacity to process the data those instruments generate in real time. That demand is fueling the rise of micro-data centers, which provide a relatively simple way to extend the edge of the corporate network to these facilities.
The trend toward cloud computing might dominate all the headlines in the tech media, but on-premises workloads are still all the rage among enterprises, according to a new survey by the Uptime Institute.
OpenNebula is the open-source platform of choice in the converged data centre, providing a simple yet flexible and powerful cloud manager that supports traditional IT features and the dynamic provisioning of modern enterprise clouds. With thousands of deployments, OpenNebula has a very wide user base that includes leading companies in banking, technology, telecom, hosting, and research.
This monthly edition of your favourite cloud management platform’s newsletter covers the latest news from the OpenNebula project, highlights from the community, and the dissemination efforts carried out in the project this past month.
What is the future of the cloud computing market? A long one: the market will keep growing, but it will be a long time before IT becomes mostly cloud-based.
Let’s start with the IT market. It is roughly (depending on who measures) $3.5T (with a T, as in trillion).
NodeWeaver takes advantage of an extremely sophisticated and flexible storage layer, based largely on the LizardFS distributed filesystem, and capable of performing reliably and efficiently even with a small number of nodes and disk drives. There are several important differences between our approach to storage and the most common choices taken in other hyper-converged infrastructures, or even in dedicated storage systems; this post will try to outline some of them.
NodeWeaver has been designed, from scratch, as a single optimized entity – using the idea (first taught to me at university by my electrical communications professor, during a discussion of Trellis coding) that when you optimize only a single step in isolation, you miss larger, potentially much more important optimizations of the whole system. It’s a common problem, and in some cases it comes down to economics: when you pool resources together you may be able to get more performance or higher utilization, since you often need a resource to be available, but you rarely need all of them available at once.
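The pooling argument can be illustrated with a toy simulation (my own sketch, not NodeWeaver code): independent bursty consumers rarely peak at the same time, so a shared pool needs far less capacity than the sum of per-consumer peaks.

```python
# Toy statistical-multiplexing demo with made-up numbers: compare sizing
# each consumer for its own peak vs. sizing one shared pool for the worst
# simultaneous total demand.
import random

random.seed(42)

CONSUMERS = 10
STEPS = 1000

# Each consumer usually idles at 1 unit but bursts to 10 units
# about 10% of the time, independently of the others.
demand = [[10 if random.random() < 0.1 else 1 for _ in range(STEPS)]
          for _ in range(CONSUMERS)]

# Dedicated sizing: every consumer gets capacity for its own peak.
dedicated = sum(max(d) for d in demand)

# Pooled sizing: capacity for the worst simultaneous total demand seen.
pooled = max(sum(d[t] for d in demand) for t in range(STEPS))

print(dedicated, pooled)  # pooled comes out well below dedicated
```

Because bursts are independent, the probability that all ten consumers burst in the same time step is vanishingly small, so the pooled peak stays far under the dedicated total – the same reason a whole-system view can beat per-step optimization.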
CloudWeavers and OpenNebula Systems team up to build turnkey hyperconverged infrastructures for next-generation enterprise computing. NodeWeaver is a drop-in replacement for traditional virtualization systems, avoiding the need for expensive and complex SANs and traditional storage systems.