Panasas’s twin NAS offerings target a range of analytics workloads


Panasas has broadened its scale-out NAS offering to include high-performance and capacity options with general availability of ActiveStor Flash and ActiveStor Ultra XL. The two products target a range of workloads, in terms of file size and I/O profile, that fall across the high-performance computing (HPC) to artificial intelligence/machine learning (AI/ML) continuum.

Speaking to ComputerWeekly.com, the company also set out the limits of its interest in object storage, as well as its thinking on cloud storage, where it currently has no presence.

The Panasas ActiveStor systems have been tailored to a range of workloads, which can mean file storage profiles that run from many, many very small files to a smaller number of very large ones.

ActiveStor Flash is an all-NVMe flash hardware appliance aimed at smaller file sizes where rapid access is required. Its ASF-100 nodes come in a 4U form factor and take up to 3.84TB of M.2 and 46TB of U.2 NVMe. DRAM and NVDIMM provide faster, cache-level storage for working data.

Meanwhile, ActiveStor Ultra XL is aimed at larger capacities and bigger file sizes. An ASU-100XL node runs to 160TB – but quadruple that for the minimum configuration – mostly comprised of spinning-disk HDD plus some faster M.2 NVMe capacity.

The two systems, both running PanFS, have benefited from controller OS and file system upgrades in version 9.2 that allow customers to deploy storage blades under a single namespace. “But with volumes created to suit workloads of differing I/O characteristics – so, smaller and fast, or cooler and larger – under one single pane of glass,” said Curtis Anderson, software architect at Panasas.

He added: “We were a one-platform company until May. Then we had two new platforms that are built on the ability to use multiple media types, with metadata going to NVMe, for example, SSDs for small files up to 1.5MB, and HDD for large files.”

The Panasas name for this functionality is Dynamic Disk Acceleration, which automatically routes data to the appropriate tier of storage.
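The routing logic described above – metadata to NVMe, small files to SSD, large files to HDD – can be pictured as a simple placement policy. The sketch below is purely illustrative: the function name, tier labels and exact threshold are assumptions based on the figures quoted in this article, not Panasas’s actual implementation.

```python
# Illustrative sketch of size-based tier routing in the style of
# Dynamic Disk Acceleration. Tier names and the 1.5MB cut-off come
# from the article; everything else is a hypothetical simplification.

SMALL_FILE_LIMIT = 1_500_000  # ~1.5MB, the small-file threshold quoted above


def choose_tier(kind: str, size_bytes: int) -> str:
    """Pick a storage tier for a piece of data based on type and size."""
    if kind == "metadata":
        return "NVMe"  # metadata always lands on the fastest media
    if size_bytes <= SMALL_FILE_LIMIT:
        return "SSD"   # small files, up to ~1.5MB
    return "HDD"       # large files go to spinning disk


print(choose_tier("metadata", 4_096))      # NVMe
print(choose_tier("file", 250_000))        # SSD
print(choose_tier("file", 50_000_000))     # HDD
```

The point of such a policy is that applications see one namespace while the file system silently matches each object to the media best suited to its access pattern.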

The rationale for the shift? “The challenge was, what if a customer is running HPC and wants to run another workload?” said Anderson.

The improvements to PanFS allow for that, and the engineering behind it was, said Anderson, a “moderately sized lift” that involved refactoring PanFS to handle new hardware types and to select and qualify those products for use with the system.

But what about object storage, given that so much unstructured data – Panasas’s bread and butter – is now in object storage format?

Anderson said: “Panasas is built as a POSIX file system, but on top of an object store, which was developed by 1999, so before Amazon’s S3. It has the characteristics of scaling and growth, etc, that object storage has, but we don’t offer access. It works differently to S3.”

Marketing and products VP Jeff Whitaker added: “Object storage is of interest, but when it comes to how the vast majority of people access data, it’s file-based. The development side of AI/ML often happens in the cloud, however, so it’s definitely something we’re thinking about as we move forward.”

In a context where the cloud is becoming increasingly important and many providers offer the option to store data there, what is the Panasas strategy?

The company is still firmly in the on-prem hardware camp but, as with object storage, it is looking at the possibilities, said Whitaker. “Right now, we’re an appliance-based datacentre platform, not software-only, and from what we’ve seen in the market, 85-90% of the market is still on-prem.”

He added: “Customers struggle to get performance from cloud-based storage. Cloud providers have to throttle storage so their networks aren’t saturated. Absolutely, customers are moving to the cloud and doing more there, so we’re looking at different scenarios and working with S3, with partners.”


