WOW! Day 1 at Accelerate was seriously EPIC! 25 new software features and a bunch of other fun announcements! A ton of innovation is about to hit the streets in the next 9 months. Brace yourself!
Let's dig into some of the announcements!
- Symmetric Active/Active: Read and write at either side of the mirror, with optional host-to-array site awareness.
- Transparent Failover: Non-disruptively failover between synchronously replicating arrays and sites.
- Async Replication Integration: Uses async engine for baseline copies and resynchronizing. Convert async relationships to sync without resending data.
- No Bolt-ons & No Licenses: No additional hardware required, no software licenses required, upgrade Purity and go!
- Simple Management: Perform data management operations from either side of the mirror, provision storage, connect hosts, create snapshots, create clones.
ActiveCluster brings three different solutions:
- Campus HA / Data Center HA: Local active / active replication to enable live migrations and rack level HA with a maximum of 0.5ms round trip latency. Typically this deployment will be for arrays adjacent to each other. Paths are presented to all hosts as Active/Optimized and MPIO can use any path to each array.
- Metro HA: Metro level active / active replication to enable sync replication across the WAN with a maximum of 5ms round trip latency. Paths are presented to local hosts as Active/Optimized, but paths to remote hosts as Active/Non-Optimized. MPIO keeps IO on Optimized paths.
- Global HA: Extension of synchronous replication with Asynchronous replication to a third site. This extends replication outside the 5ms requirement for global deployment.
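The difference between these topologies largely comes down to how paths are advertised to hosts. As a rough sketch of the general ALUA idea (my illustration in Python with hypothetical names, not Pure's MPIO code), a multipathing policy keeps IO on Active/Optimized paths and only falls back to Non-Optimized ones when it must:

```python
# Illustrative sketch of ALUA-aware path selection, the behavior described
# for Metro HA above. Names and structure are hypothetical, not Pure's API.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Path:
    name: str
    state: str  # "active-optimized" or "active-non-optimized"

def select_paths(paths):
    """Prefer Active/Optimized paths; use Non-Optimized paths only
    if no optimized path is available (e.g. the local array is down)."""
    optimized = [p for p in paths if p.state == "active-optimized"]
    if optimized:
        return optimized
    return [p for p in paths if p.state == "active-non-optimized"]

# Local-array paths are advertised optimized, remote-array paths are not:
paths = [
    Path("localA:ct0", "active-optimized"),
    Path("localA:ct1", "active-optimized"),
    Path("remoteB:ct0", "active-non-optimized"),
]
rr = cycle(select_paths(paths))  # round-robin across the chosen set
print(next(rr).name)  # -> localA:ct0
```

In the Campus HA case every path is Active/Optimized, so the same policy naturally spreads IO across both arrays.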
In traditional Pure Storage fashion, the deployment is truly simple. Just 4 steps to get this up and running. Check this out:
1. Connect the arrays
> purearray connect --type sync-replication
2. Create a stretched pod
> purepod create pod1
> purepod add --array arrayB pod1
3. Create a volume
> purevol create --size 1T pod1::vol1
4. Connect hosts
> purehost connect --vol pod1::vol1 host
As more and more applications get consolidated onto a single platform, QoS becomes a critical feature. Previously we released an “always-on” or “no touch” QoS solution which automatically protected the system from the impact of a noisy neighbor. The process added artificial latency to the noisy neighbor to ensure there was performance available for the other LUNs. This was awesome because there was no configuration required from the user.
We are enhancing this solution with the introduction of performance classes and limits to assure both minimum performance (guarantees) for first-class workloads and maximums (throttles) that a given volume can consume. This keeps things simple and easy, but gives the user the ability to make more fine-grained configurations.
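A maximum limit of this kind is commonly implemented as a token bucket. Here is a toy sketch of that general technique (my illustration, not Purity's internals):

```python
# Toy token-bucket throttle illustrating a per-volume IOPS limit
# (a sketch of the general technique, not Purity's implementation).
class TokenBucket:
    def __init__(self, rate_iops, burst):
        self.rate = rate_iops      # tokens replenished per second
        self.capacity = burst      # maximum bucket size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        """Return True if one IO may proceed at time `now` (seconds)."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: the IO waits, which adds latency

# A volume limited to 100 IOPS admits roughly its burst plus ~100
# refilled IOs over one second of 1ms ticks:
bucket = TokenBucket(rate_iops=100, burst=10)
admitted = sum(bucket.allow(t / 1000) for t in range(1000))
print(admitted)
```

Holding an IO back when the bucket is empty is exactly the "artificial latency" behavior described above; a minimum guarantee can be built the same way by reserving tokens for first-class volumes.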
VVol Integration (aka Virtual Volumes):
Virtual Volumes has been a feature discussed for a while, and we are looking forward to seeing what customers do with it. Our new VVols implementation has been updated with the new VASA provider running HA and stateless on the array. This functionality enables VMFS-to-VVol migration, so you can get up and running quickly and easily. It also adds functionality for VM-granular snaps, cloning, replication, migration from array to array, and QoS, all of which are accelerated by Pure.
Snap and CloudSnap:
We are extending our snapshot functionality to include off-array portability. This allows snapshots to be offloaded to another FlashArray, FlashBlade, NFS target (of any kind), a backup product via our DeltaSnap API, or the public cloud in cloud-native formats (S3, EBS, Glacier, etc.). All metadata is stored with the snapshot, enabling its portability. The model here is to give customers the plumbing to move data in and out of the cloud, or to alternate technologies that may have better economics. This will augment traditional backups. The power of cloud-native formats is huge. Think about converting your snapshot to an EBS volume, attaching it to an EC2 instance, and you are ready to rock! Our first partner here is AWS, but expect GCP and Azure to follow.
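The key enabler is that the snapshot is self-describing: its metadata travels with the data, so any target can restore it. A minimal sketch of that idea (the format and field names are my illustration, not the DeltaSnap wire format):

```python
# Sketch of a self-describing snapshot bundle: metadata travels with the
# data, so any target (object store, NFS, another array) can restore it.
# Format and field names are hypothetical, not the DeltaSnap format.
import json
import hashlib

def package_snapshot(volume_name, snap_name, data: bytes) -> bytes:
    """Bundle snapshot data with the metadata needed to restore it anywhere."""
    meta = {
        "volume": volume_name,
        "snapshot": snap_name,
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    header = json.dumps(meta).encode()
    return len(header).to_bytes(4, "big") + header + data

def unpack_snapshot(blob: bytes):
    """Recover metadata and data from a bundle; verify integrity."""
    hlen = int.from_bytes(blob[:4], "big")
    meta = json.loads(blob[4:4 + hlen])
    data = blob[4 + hlen:]
    assert hashlib.sha256(data).hexdigest() == meta["sha256"], "corrupt offload"
    return meta, data

blob = package_snapshot("pod1::vol1", "snap.daily.1", b"\x00" * 16)
meta, data = unpack_snapshot(blob)
print(meta["volume"], meta["size"])  # -> pod1::vol1 16
```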
Windows Storage Server on Purity//RUN:
Customers have been asking for file services on FlashArray forever, and we are finally announcing Windows Storage Server running on Purity//RUN. This enables all Windows file functionality on top of Pure Storage resiliency and performance. Bring your own license and we will fully support it via our Purity-optimized WFS image. All the features of the FlashArray are supported including data reduction, snapshots, and replication.
The DirectFlash Shelf enables expansion of the FlashArray//X via a 50/100Gb RoCEv2 (RDMA over Converged Ethernet) connection. This enables native NVMe over Fabrics between the FlashArray//X and the DirectFlash Shelf. In addition to expansion, this solution will allow direct NVMe over Fabrics to hosts to enable top-of-rack flash solutions with an NVMe target. Look for some awesome innovations in the future with this solution.
Cisco and Pure have tested and validated a full NVMe over Fabrics solution via Cisco’s RDMA VIC card. Check out tomorrow’s keynote for more information on this deployment, as we dig deeper into NVMe over Fabrics across the stack.
We believe that tomorrow’s cloud will include file, object, and block solutions, with super low latency end-to-end and rack scale for any environment. In a 42U rack, you can pack a serious punch for those hyperscale environments: 1,300 cores, 50TB of DRAM, and 2.6PB per rack. WOW!
New 17TB FlashBlades:
Some people found the 100TB system too small and the 1.6PB system too big, so we found a blade size that is just right. Our 17TB blades are now available, giving customers a 500TB-capable array in a 15-blade system!
I am still shocked by the density and performance of FlashBlade, and I talk about it every day. Check out the new scale of FlashBlade:
- Simple and intuitive deployment and management
- Scale performance and capacity instantly by adding a blade at a time
- Up to 8PB namespace – in a 20 rack unit configuration
- Support for billions of files and objects
- 75GB/s Read / 25GB/s Write at 7.5M IOPS
- Super simple deployment with integrated software-defined networking
- Evergreen Storage supported
For those that don’t have a 95″ width rack, we also have the 19″ rack version! 🙂
FlashBlade Fast Object Store:
Object storage has never been known for being fast; in fact, quite the opposite. But applications today need consistency at high concurrency. Fast Object Storage on FlashBlade is 10X faster in small deployments. Where you see serious challenges is when you start to scale: metadata slows the system down. That said, at 1 million objects, FlashBlade performs >100X faster than any other solution on the market today. This allows cloud-native application developers to get consistent results, no matter the scale. Simple and easy, without concerns.
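One common way scale-out systems keep metadata from becoming the bottleneck is to hash object keys across nodes, so each lookup touches exactly one node no matter how many objects exist. A generic illustration of that idea (not FlashBlade's actual design; the node count is just an example):

```python
# Illustration of why scale-out metadata keeps object lookups fast:
# hashing keys across nodes means each lookup resolves with one hash,
# never a central scan. Generic technique, not FlashBlade's internals.
import hashlib

NUM_NODES = 15  # e.g. one metadata service per blade (illustrative)

def owner(key: str) -> int:
    """Map an object key to the node that owns its metadata."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_NODES

# Ownership is stable and evenly spread, so a million or a billion
# objects resolve in constant time per lookup:
keys = [f"bucket/object-{i}" for i in range(100_000)]
counts = [0] * NUM_NODES
for k in keys:
    counts[owner(k)] += 1
print(min(counts), max(counts))  # roughly balanced around 100000/15
```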
Some major updates to Pure1 give customers more visibility, reporting, and proactive support.
Pure1 Global Dashboard:
First is visibility across the entire enterprise or fleet of arrays with a global dashboard, showing data reduction, average load, and capacity analysis. In addition, you also have visibility of all alerts and support tickets across all arrays in one single pane of glass.
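As a toy illustration of the kind of rollup such a dashboard performs (the array names and field names here are made up):

```python
# Sketch of a fleet-level rollup like the global dashboard describes:
# aggregate per-array stats into one view. All names are hypothetical.
arrays = [
    {"name": "prod-fa1", "capacity_tb": 100, "used_tb": 62,  "load": 0.41, "open_alerts": 0},
    {"name": "prod-fa2", "capacity_tb": 250, "used_tb": 190, "load": 0.73, "open_alerts": 2},
    {"name": "dr-fa1",   "capacity_tb": 100, "used_tb": 35,  "load": 0.12, "open_alerts": 0},
]

def fleet_summary(arrays):
    """Collapse per-array stats into the single-pane-of-glass view."""
    cap = sum(a["capacity_tb"] for a in arrays)
    used = sum(a["used_tb"] for a in arrays)
    return {
        "arrays": len(arrays),
        "utilization_pct": round(100 * used / cap, 1),
        "avg_load": round(sum(a["load"] for a in arrays) / len(arrays), 2),
        "open_alerts": sum(a["open_alerts"] for a in arrays),
    }

print(fleet_summary(arrays))
```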
Pure1 META Workload Planner:
Planning to add additional workloads to the array? Need to forecast load or capacity? Check out our new AI engine, META. It enables workload planning and predictions to optimize arrays, and helps define strategies for future deployments with ease.
What an amazing day! Come check out the keynote tomorrow for more fun!
Until next time, stay flashy my friend!