Dell ObjectScale XF960
The Next Evolution of Enterprise Object Storage for Kubernetes
Overview:
Extreme performance at scale for emerging workloads like Generative AI and real-time analytics
Dell ObjectScale XF960 is enterprise-class, all-flash object storage and the first member of the ObjectScale X-Series appliance family. Built with NVMe-based SSDs on a 16th-generation Dell PowerEdge server, the XF960 appliance delivers extreme performance at scale for emerging workloads such as generative AI, machine learning, IoT and real-time analytics applications. The XF960 hardware stack includes servers, network switches, rack-mount equipment and appropriate power cables, all optimized to run ObjectScale software.
The XF960 embraces the NVMe-oF (NVMe over Fabrics) protocol for its blazing-fast 100 Gb back-end network, accelerating node-to-node communication and unlocking the full throughput of the all-flash system, especially in large-scale deployments. Its combination of scale and performance is exactly what organizations need to train their models on more data than ever.
Simplifying the Dell object storage customer experience for the era of AI.
We're excited to share that we'll be converging ECS with ObjectScale into a single platform. The new experience will carry over everything customers love from ECS - including its codebase, architecture, UI and APIs - with a simple upgrade to the modernized ObjectScale operating environment. This new-generation platform will balance exceptional performance and value, supporting the full range of modern object storage requirements.
What is Object Storage?
Gartner defines distributed file systems and object storage as software and hardware platforms, built on a distributed design, that support object and/or scale-out file system technology to meet the growing demands of unstructured data. This market is based on a distributed computing architecture with no single point of failure or contention throughout the system. More specifically, the product should have a fully distributed architecture in which data and metadata are distributed, replicated, or erasure-coded across multiple nodes in the cluster. When managing a system at multi-petabyte scale, it is important to be able to scale out the capacity and throughput of the cluster by adding independent nodes to the global namespace / file system.
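To make the replication and erasure-coding idea concrete, the sketch below shows a simplified single-parity scheme in Python: an object is split into k data fragments plus one XOR parity fragment, each of which would live on a different node, and any single lost fragment can be rebuilt from the survivors. This is purely illustrative and is not ObjectScale's actual coding scheme; the fragment count and helper names are assumptions for the example.

```python
from functools import reduce

# Illustrative single-parity erasure coding (k data fragments + 1 parity).
# Not ObjectScale's actual scheme; k and the helper names are assumptions.

def encode(data: bytes, k: int = 4) -> list[bytes]:
    """Split an object into k equal fragments plus one XOR parity fragment."""
    frag_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(frag_len * k, b"\x00")    # pad so fragments align
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags + [parity]                       # each fragment goes to a different node

def rebuild(frags: list[bytes | None]) -> list[bytes]:
    """Recover a single missing fragment by XOR-ing the surviving ones."""
    missing = frags.index(None)
    survivors = [f for f in frags if f is not None]
    frags[missing] = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
    return frags

# Example: lose fragment 2 of a 5-fragment stripe and rebuild it.
stripe = encode(b"object payload bytes", k=4)
stripe[2] = None
assert b"".join(rebuild(stripe)[:4]).rstrip(b"\x00") == b"object payload bytes"
```

Production systems use wider codes (for example, many data fragments with several parity fragments) so that multiple simultaneous node or drive failures can be tolerated, but the principle is the same.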
Object storage, according to Gartner, also refers to systems and software that store data in "objects" and offer it to clients using RESTful HTTP application programming interfaces (APIs), such as Amazon Simple Storage Service (S3), which has become the de facto standard for object storage access.
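As an illustration of that RESTful access model, the short Python sketch below writes and reads an object through the S3 API using the standard boto3 client. The endpoint URL, credentials, bucket, and key are placeholders, not ObjectScale-specific values.

```python
import boto3

# Point the standard S3 client at an S3-compatible object store.
# Endpoint and credentials below are placeholders for the example.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Every object is addressed by bucket + key and carries its own metadata.
with open("batch-001.tar", "rb") as body:
    s3.put_object(
        Bucket="training-data",
        Key="datasets/images/batch-001.tar",
        Body=body,
        Metadata={"source": "camera-7", "labelled": "true"},
    )

# Objects are retrieved over the same RESTful HTTP API.
obj = s3.get_object(Bucket="training-data", Key="datasets/images/batch-001.tar")
data = obj["Body"].read()
```

Because the API is HTTP-based and flat (buckets and keys rather than a directory tree), applications can read and write objects from anywhere on the network without mounting a file system.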
Software-Defined Storage
SDS is a storage architecture that separates the storage software from the underlying hardware infrastructure. Unlike traditional network attached storage (NAS) or storage area network (SAN) systems, SDS is typically designed to run on industry-standard systems, and the software does not rely on proprietary hardware.
Decoupling storage software from its hardware allows you to expand your storage capacity as needed, rather than adding another piece of proprietary hardware. You can also upgrade or downgrade hardware at any time. In short, SDS offers a great deal of flexibility.
In most cases, SDS should have:
- Automation: Simplified management that keeps costs down.
- Standard interface: An API for management and maintenance of storage devices and services.
- A virtualized data path: Interfaces for block, file, and object that support applications written to these interfaces.
- Scalability: The ability to scale out storage infrastructure without impeding performance.
- Transparency: The ability to monitor and manage storage use while keeping track of available resources and costs.
Benefits of Software-Defined Storage:
- Flexibility in hardware selection: The SDS you choose does not have to be from the same company that sold you the hardware. You can use any commodity server to build your SDS-based storage infrastructure. This means that you can maximize the capacity of your existing hardware as your storage needs grow.
- Cost efficiency: SDS is distributed and scales out instead of scaling up, allowing you to adjust capacity and performance independently.
- You can join many data sources to build your storage infrastructure: network object platforms, external disk systems, disk or flash resources, virtual servers, and cloud-based resources (even those dedicated to specific workloads) can be combined to create a unified storage volume.
- SDS can adjust automatically based on your capacity needs: Because SDS doesn't depend on specific hardware, it can draw automatically on any storage volume it's connected to. The storage system can adapt to changing data and performance needs without administrator involvement, new connections, or new hardware.
- Limitless scalability: Traditional SANs are limited in the number of nodes (devices with assigned IP addresses) they can use. SDS, by its very definition, is not similarly constrained, which means it is, theoretically, infinitely scalable.
Pricing Notes:
- Pricing and product availability subject to change without notice.