Wednesday, February 13, 2019

Successful Hardware Strategies for a Software-Dominated World - Dell EMC Certifications


Over the last couple of years, we have witnessed a massive transformation in the telecom industry with a shift away from proprietary, expensive IT equipment in favour of standard, cost-efficient computing blocks.

Software and Cloud rule


Thanks to these open standards, software virtualisation became possible: multiple virtual machines and multiple operating systems could be managed on a single physical IT platform. Now, enterprises could quickly update and upgrade networking functions without the need for expensive hardware swaps. This delivered increased flexibility, speed to market, agility and cost reduction – all critical factors in an increasingly competitive, global market.

Today, this software trend continues apace. Business leaders are increasingly asking their IT teams to revamp existing solutions in favour of software- and cloud-centric approaches, with virtualisation initiatives like software-defined networking (SDN) and network functions virtualisation (NFV) proving particularly popular.

A hardware strategy is still required


I believe that all these developments are hugely positive. However, it’s not an “either/or” scenario. In my book, hardware and software need to be part of a single overarching strategy, with hardware regarded as a critically important component in its own right. As the world swings towards disaggregated white boxes, I believe that, more than ever, Network Equipment Providers (NEPs) and System Integrators (SIs) need a hardware approach that will deliver value while still supporting their customers’ NFV initiatives.

NFV is not all about software


After all, a true NFV solution needs a hardware ecosystem, featuring servers, storage, switches and networking. Due to bandwidth, latency and security issues, not everything can or will go to the Cloud. With 5G, the proliferation of mobile and IoT devices, and the growth of high-volume content delivery services, edge computing and IoT Edge Gateways will become increasingly important, particularly for analytics. This all points back to the importance of having the right hardware approach to set you up for success over the longer term.

Best practices


When you have questions, who better to ask than the people directly at the coal face? And so, together with Intel, we commissioned AvidThink to conduct research and speak with leading telecom providers to understand current best practices in hardware strategy and likely future trends. Hot off the press, the report makes for interesting reading.

Three NEP solutions


From the core to the edge, the report confirms that NEPs need a strategy for handling hardware integration for Communication Service Providers (CSPs) and enterprises. In general, providers indicate that they will consume NEP solutions in one of three ways:

  • As tightly-integrated software and hardware stacks — even if the underlying platform is x86-based. This does not preclude the NEP from creating integrated bundles and, in some cases, these appliances could have built-in elements of hardware acceleration.
  • As pure software solutions, independent of the underlying hardware in a fully disaggregated scheme. This model sees CSPs deciding on the NFV infrastructure first and expecting the NEP software to execute seamlessly on the platform of choice.
  • In a hybrid approach, where the NEP provides soft-integration or pre-integration stacks with their software functions pre-certified on a specific supplier’s hardware platform.

All the indicators are that all three formats will persist for some time and across all locations: data center, mobile edge, and enterprise WAN edge, with different locations likely to favor one format over another.

Hybrid – the best of both worlds


However, according to the report, the consensus is that a hybrid or soft-integration approach can reduce risk and provide benefits to all members in the value chain. In this configuration, NEPs would offer pre-integrated or pre-certified solution stacks to their customers. These are different from the branded, tightly-integrated appliances that were typical in many NEP solutions.

Instead, the hybrid format would see the NEP work with one or more hardware partners to ensure that their software solutions work well on specific platforms. These platforms could be uCPEs, single servers, or a full rack of servers.

Faster time to market and reduced risk


It’s true that in the past, system integrators performed similar tasks, putting together solutions by loading, integrating, and testing software on server platforms. However, and here’s the important point, this new hybrid approach involves moving these activities upstream in the value chain.

This research report shows that if you’re a NEP being pushed to become a purely software vendor, taking a hybrid strategy will deliver faster time to market, reduced risk during deployment, faster troubleshooting, and optimized performance.

Be future-ready


Adopting this strategy can also prepare you for the goal of delivering a fully disaggregated platform. A hybrid approach, coupled with the right hardware partner, ensures that you can provide end users with time savings, convenience, and the peace of mind that comes with a pre-integrated, pre-certified software and hardware stack.

As the world marches toward software-defined infrastructure and the industry ideal of disaggregation, it’s important to remember that these functions are still dependent on specific hardware platforms. I believe that a hybrid model that is future ready, open and democratized represents the best of both worlds.




Tuesday, January 29, 2019

Accelerating AI and Deep Learning with Dell EMC Isilon and NVIDIA GPUs - Dell EMC Certifications


Over the last few years, Dell EMC and NVIDIA have established a strong partnership to help organizations accelerate their AI initiatives. For organizations that prefer to build their own solution, we offer Dell EMC’s ultra-dense PowerEdge C-series with NVIDIA Tesla V100 Tensor Core GPUs, which allows scale-out AI solutions to grow from four to hundreds of GPUs per cluster. For customers looking to leverage a pre-validated hardware and software stack for their Deep Learning initiatives, we offer Dell EMC Ready Solutions for AI: Deep Learning with NVIDIA, which also feature Dell EMC Isilon All-Flash storage. Our partnership is built on the philosophy of offering flexibility and informed choice across a broad portfolio.

To give organizations even more flexibility in how they deploy AI with breakthrough performance for large-scale deep learning, Dell EMC and NVIDIA have recently collaborated on a new reference architecture that combines Dell EMC Isilon All-Flash scale-out NAS storage with NVIDIA DGX-1 servers for AI and deep learning (DL) workloads.

To validate the new reference architecture, we ran multiple industry-standard image classification benchmarks using 22 TB datasets to simulate real-world training and inference workloads. This testing was done on systems ranging from one DGX-1 server, all the way to nine DGX-1 servers (72 Tesla V100 GPUs) connected to eight Isilon F800 nodes.

This blog post summarizes the DL workflow, the training pipeline, the benchmark methodology, and finally the results of the benchmarks.

Key components of the reference architecture shown in figure 1 include:

  • Dell EMC Isilon All-Flash scale-out NAS storage, which delivers the scale (up to 33 PB), performance (up to 540 GB/s), and concurrency (up to millions of connections) to eliminate the storage I/O bottleneck and keep the most data-hungry compute layers fed, accelerating AI workloads at scale.
  • NVIDIA DGX-1 servers, which integrate up to eight NVIDIA Tesla V100 Tensor Core GPUs fully interconnected in a hybrid cube-mesh topology. Each DGX-1 server can deliver 1 petaFLOPS of AI performance and is powered by the DGX software stack, which includes NVIDIA-optimized versions of the most popular deep learning frameworks for maximized training performance.


Benchmark Methodology Summary


To measure the performance of the solution, various benchmarks from the TensorFlow Benchmarks repository were executed. This suite of benchmarks performs training of an image classification convolutional neural network (CNN) on labeled images. Essentially, the system learns whether an image contains a cat, dog, car, train, etc.
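For context, the suite in question is the public tensorflow/benchmarks project (tf_cnn_benchmarks.py). A single-node run might be launched roughly as sketched below; the script and flag names come from that project, while the mount path, batch size, and batch count are illustrative assumptions rather than the exact settings used in this testing.

```python
# Hypothetical sketch: launching the TensorFlow CNN benchmark suite
# (tensorflow/benchmarks, tf_cnn_benchmarks.py) against an ImageNet-style
# dataset on a shared NFS mount. Path, batch size, and batch count are
# illustrative assumptions, not the exact parameters used in the report.
import subprocess

cmd = [
    "python", "tf_cnn_benchmarks.py",
    "--model=resnet50",                            # image-classification CNN
    "--data_name=imagenet",
    "--data_dir=/mnt/isilon/imagenet_tfrecords",   # assumed NFS mount point
    "--num_gpus=8",                                # one DGX-1 server: 8 Tesla V100 GPUs
    "--batch_size=256",                            # per-GPU batch size (assumption)
    "--use_fp16=True",                             # Tensor Core mixed precision
    "--num_batches=500",
]
subprocess.run(cmd, check=True)
```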

The well-known ILSVRC2012 image dataset (often referred to as ImageNet) was used. This dataset contains 1,281,167 training images in 144.8 GB[1]. All images are grouped into 1000 categories or classes. This dataset is commonly used by deep learning researchers for benchmarking and comparison studies.

When running the benchmarks on the 148 GB dataset, it was found that the storage I/O throughput gradually decreased and became virtually zero after a few minutes. This indicated that the entire dataset was cached in the Linux buffer cache on each DGX-1 server. Of course, this is not surprising since each DGX-1 server has 512 GB of RAM and this workload did not significantly use RAM for other purposes. As real datasets are often significantly larger than this, we wanted to determine the performance with datasets that are not only larger than the DGX-1 server RAM, but larger than the 2 TB of coherent shared cache available across the 8-node Isilon cluster. To accomplish this, we simply made 150 exact copies of each image archive file, creating a 22.2 TB dataset.
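Using the figures quoted above, a quick back-of-the-envelope check confirms why the copies were necessary:

```python
# Back-of-the-envelope check, using the figures from the text above, that the
# enlarged dataset can no longer be served from cache.
base_dataset_gb = 148     # size of the original dataset cited above
copies = 150              # exact copies of each image archive file
dgx1_ram_gb = 512         # RAM per DGX-1 server
isilon_cache_tb = 2       # coherent shared cache across the 8-node F800 cluster

enlarged_tb = base_dataset_gb * copies / 1000
print(f"Enlarged dataset: ~{enlarged_tb:.1f} TB")    # ~22.2 TB
print(enlarged_tb * 1000 > dgx1_ram_gb)              # True: exceeds per-server RAM
print(enlarged_tb > isilon_cache_tb)                 # True: exceeds the Isilon cache
```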

Conclusion


Here are some of the key findings from our testing of the Isilon and NVIDIA DGX-1 server reference architecture:

  • Compelling performance across industry-standard AI benchmarks from eight through 72 GPUs, with no degradation in throughput or performance.
  • Linear scalability from 8 to 72 GPUs, delivering up to 19.9 GB/s while keeping the GPUs pegged at >97% utilization.
  • The Isilon F800 system can deliver up to 96% of the throughput of local memory, bringing it extremely close to the maximum theoretical performance an NVIDIA DGX-1 system can achieve.
  • Isilon-based DL solutions deliver the capacity, performance, and high concurrency to eliminate storage I/O bottlenecks for AI, providing a rock-solid foundation for large-scale, enterprise-grade DL solutions with a future-proof scale-out architecture that meets your AI needs of today and scales for the future.




Thursday, January 17, 2019

Sony Entertainment & Technology, and the Digital Transformation of Entertainment - Dell EMC Certifications


Ahead of the holidays, I had the chance to make one last customer/partner visit to the Sony Pictures studios lot, and what I saw blew me away. In the land of fantasy, celluloid and big ideas we met with Innovation Studios and saw yet another example of why one of this year’s biggest trends will be how data is continuing to reshape industries.

Dell Technologies, Intel, Deloitte and Sony Pictures Entertainment came together in 2018 to help create Innovation Studios.  My trip was to follow up on this and see Sony’s vision of the future and how they were using our technology to make it happen. Dell Technologies supported Innovation Studios with PowerEdge Servers, Dell EMC Networking, Isilon Storage, Dell workstations and VR gear. I saw just how much the analog and digital worlds are blurring through volumetric image acquisition, a process that produces a photo-realistic three-dimensional recreation of a physical environment.

That description fails to capture the extent of what I saw. Imagine the room you’re now standing in, pick an object and gaze at it. Now move your head from side to side and up and down. See how the light hits it and illuminates it differently as your focus shifts? Now imagine walking behind that object and disappearing or having the object cast a shadow onto you. Volumetric image acquisition allows a person to interact with an object that isn’t there. The scene is shot against a green screen, a method most are familiar with from major blockbusters. But unlike CGI being inserted during post-production, this technology displays the virtual background and the physical actor/props in real-time allowing the director to see the finished product as they are shooting.

This is a revolutionary way to tackle filmmaking that will increasingly find its way into blockbusters and indie films alike. Glenn Gainor, president of Innovation Studios, hopes to democratize storytelling by using technology to bring down the costs of filmmaking while creating immersive experiences that have previously been impossible to create.

The studio looks pretty standard at first sight: to the right is a mini data center humming full of our equipment, and to the left is production equipment, cameras and a small stage with a green screen. When Glenn flipped on the camera, the “Shark Tank” exit interview set magically appeared on the monitors (this virtual set had just made its debut on the show a few days earlier). As the camera moved around the empty studio, it appeared we were right there on the “Shark Tank” set, our eyes completely fooled. A Variety reporter recently noted a similar reaction from a “Shark Tank” production manager.

Right now the rules are being broken. Inspirational leaders like Glenn are attempting to rewrite the story on how things are done. Imagine what this means for the entertainment industry. No more waiting until the lighting is just right. No more having to pay massive amounts to shoot on location. The ability to quickly spin up an environment for reshoots. This approach will deliver so much creative freedom and fundamentally change the way movies are made.

Every industry will go through a similar transformation, but incorporating technology into the culture and skill set of a company can be difficult. Organizations need strong technology partners who are committed to helping them achieve these digital outcomes. In the case of Innovation Studios, that meant a partnership with Dell Technologies, Deloitte and Intel to deliver the technology solutions that made this possible. Volumetric capture is a very CPU- and GPU-intensive process, and reproducing it live requires substantial compute, networking and high-performance storage bandwidth. The solution deployed features Intel Xeon-powered PowerEdge servers, Dell EMC Networking and Isilon storage on the back end, plus Dell Precision Workstations. Our team has worked very closely with Innovation Studios to ensure they had the right hardware configured to deliver the performance and scale this solution required. I can’t wait to see how this reshapes the future of filmmaking. These are exciting times!




Thursday, January 3, 2019

Best Practices for Robust VMware Data Protection in Data Environment - Dell EMC Certifications


Everyone has heard the numbers. There is more and more data, and it keeps growing at an ever-increasing rate. According to the latest IDC estimates, the amount of data is expected to grow tenfold by 2025, to 163 zettabytes. To thrive in today’s economy, organizations must make the most of their data and must also ensure that the data is protected and can be recovered quickly and cost-effectively. And, since the majority of enterprise workloads run on virtualized environments, with most of those workloads running on VMware, having robust VMware data protection is essential to the success of most organizations.

A number of best practices deployed as part of your data protection strategy can help you achieve this:

Automate protection of virtual workloads


With more data and more VMs being spun up at an ever-increasing rate, you cannot rely on manual processes to ensure that all your new applications and VMs are protected. You do not want busy application owners or vAdmins to have to manually create new protection policies or assign new workloads or VMs to existing policies. And you certainly do not want the lag between the creation of a workload and the moment someone raises a request with a backup or storage admin to configure its backups.

Modern data protection solutions automate protection processes with support for dynamic mapping of vSphere objects to existing protection policies. This means that your new VMs are automatically assigned and protected upon creation, based on criteria you define (such as DS cluster, data center, tags, VM name, etc.), with no human intervention necessary.
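As a purely illustrative sketch, and not any particular vendor's API, this kind of rule-based mapping amounts to matching a new VM's attributes against policy criteria at creation time; every name and field below is hypothetical:

```python
# Hypothetical sketch of tag/name-based policy assignment for new VMs.
# The VM attributes, rules, and policy names below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    datacenter: str
    tags: set = field(default_factory=set)

# Each rule maps a predicate over VM attributes to an existing protection policy.
POLICY_RULES = [
    ("gold-sla",   lambda vm: "prod" in vm.tags),
    ("silver-sla", lambda vm: vm.name.startswith("sql-")),
    ("bronze-sla", lambda vm: vm.datacenter == "dc-edge"),
]

def assign_policy(vm: VM, default: str = "default-policy") -> str:
    """Return the first matching policy, so new VMs are protected on creation."""
    for policy, matches in POLICY_RULES:
        if matches(vm):
            return policy
    return default

print(assign_policy(VM("sql-billing-01", "dc-core", {"prod"})))  # -> gold-sla
```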

Self-service data protection


Your application owners and vAdmins are busy making sure that your business- and mission-critical applications are up and performing as intended.

They do not want data protection tasks (backup policy changes, file and/or VM recoveries, etc.) delayed by having to go through a backup or infrastructure admin, and they certainly do not want to learn, or log into, another system or UI to do it on their own. Enabling your application owners and vAdmins to perform data protection tasks directly from the native vSphere UI they are familiar with can go a long way toward making their jobs easier, at least where data protection is concerned.

Modern data protection solutions provide deep integration with vSphere to deliver self-service data protection workflows, giving application owners and vAdmins the freedom to perform the vast majority of data protection tasks directly from the native and familiar vSphere UI.

Distributing data across Software Defined Data Center nodes


Most data protection solutions handle deduplication and distribution of data at the edge of the SDDC, after the data has been transferred from the source storage to the target backup storage. This results in extensive network traffic and bandwidth usage. It can also force the requirement for a separate, expensive backup network and lead to inefficient use of storage space.

Modern data protection solutions perform deduplication processing within the SDDC, before the data is transferred to backup storage. This allows the solution to scale out with the SDDC and ensures data processing requirements do not fall behind as the SDDC scales. In addition, an ideal data protection solution should also utilize horizontally scaled-out pools of protection storage, which results in more efficient allocation of software-defined storage. The result is less bandwidth usage and less storage capacity consumed for backing up your data.
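Conceptually, deduplicating within the SDDC means fingerprinting data where it lives and shipping only chunks the backup target has not already seen. The following is a minimal, generic sketch, assuming fixed-size chunks, SHA-256 fingerprints, and an in-memory set standing in for the target's catalog:

```python
# Minimal, generic illustration of source-side deduplication: hash each chunk
# where the data lives and transfer only chunks the target does not already hold.
# Chunk size, hashing, and the in-memory "target index" are simplifying assumptions.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024          # 4 MiB fixed-size chunks (illustrative)
target_index = set()                   # stands in for the backup target's catalog

def backup(path: str):
    """Return (chunks_seen, chunks_actually_sent) for one backup pass."""
    seen = sent = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            seen += 1
            fingerprint = hashlib.sha256(chunk).hexdigest()
            if fingerprint not in target_index:
                target_index.add(fingerprint)   # ship the chunk, record it
                sent += 1                        # (network transfer elided)
    return seen, sent
```

Real products use variable-size chunking and a persistent, distributed index, but the bandwidth saving comes from the same principle: identical chunks cross the network only once.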

Change block tracking restore capabilities


Change block tracking for backups is very common among today’s data protection solutions.  This means that for each subsequent VM backup, the solution only copies data blocks that have changed since the last backup.  This results in faster backups and less bandwidth consumed.

What is far less common, but important, is change block tracking restore. This allows the solution to track the difference between the current version of the VM and the version you are trying to restore, and then restore only the blocks that have changed between the two. The result is much faster VM restores and much less bandwidth consumed by those restores. A solution that supports change block tracking restore enables new architectures without compromising recovery time objectives; one such architecture is the ability to back up and restore VMs across wide area networks. A modern data protection solution will support change block tracking restores, as sketched below.
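To make the idea concrete, here is a deliberately simplified sketch of a changed-block restore: compare per-block fingerprints of the current virtual disk against the backup image and rewrite only the blocks that differ. The block size and the hashing of both sides are simplifying assumptions; a real CBT restore obtains the changed-block map from the hypervisor rather than rescanning the disk.

```python
# Simplified changed-block restore: rewrite only blocks that differ between the
# current VM disk and the backup image. Real CBT gets the changed-block map from
# the hypervisor; hashing both sides here just keeps the example self-contained.
import hashlib

BLOCK_SIZE = 1 * 1024 * 1024   # 1 MiB blocks (illustrative)

def restore_changed_blocks(current_disk: str, backup_image: str) -> int:
    """Copy only differing blocks from backup_image into current_disk."""
    restored = 0
    with open(current_disk, "r+b") as disk, open(backup_image, "rb") as backup:
        offset = 0
        while backup_block := backup.read(BLOCK_SIZE):
            disk.seek(offset)
            current_block = disk.read(len(backup_block))
            if hashlib.sha256(current_block).digest() != hashlib.sha256(backup_block).digest():
                disk.seek(offset)
                disk.write(backup_block)   # rewrite only this changed block
                restored += 1
            offset += len(backup_block)
    return restored
```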

Performance disaggregated from capacity


There is a trend in data protection solutions towards simplicity in the form of converged appliances that combine data protection software and backup storage, along with all the ancillary compute and networking, in a single converged appliance.  While this trend towards simplicity is admirable, for many data protection solutions, it has come at the expense of capacity and/or performance efficiency.

For many of these simple, converged solutions, capacity and processing are linked. Once you run out of processing power on an appliance – too many VMs to back up, too many backup streams needed to comply with SLAs, etc. – you need another appliance to add more processing, even if you did not need the additional storage capacity. Similarly, if you run out of storage capacity, you need another appliance and end up paying for additional processing that you did not need. This is a common dilemma of architectures that scale capacity and performance in lockstep, as the small example below illustrates.
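A small worked example, using entirely hypothetical appliance figures, shows how linked scaling inflates cost: if an appliance bundles, say, 100 TB of capacity with processing headroom for 500 VMs, outgrowing either dimension forces you to buy more of both.

```python
# Hypothetical appliance figures, purely to illustrate the linked-scaling dilemma.
import math

capacity_per_appliance_tb = 100     # assumed usable capacity per appliance
vms_per_appliance = 500             # assumed processing headroom per appliance

def appliances_needed(total_tb: float, total_vms: int) -> int:
    # Linked scaling: you must satisfy whichever dimension runs out first.
    return max(math.ceil(total_tb / capacity_per_appliance_tb),
               math.ceil(total_vms / vms_per_appliance))

# 120 TB but only 200 VMs: capacity forces a second appliance, so you pay for
# processing headroom (1,000 VMs' worth) that you never use.
print(appliances_needed(total_tb=120, total_vms=200))   # -> 2
```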

A modern data protection solution, whether it is in the traditional HW/SW form factor, SDDC or a converged appliance, disaggregates performance from capacity and allows each to scale independently.  Ideally, the backup processes should be load balanced across your entire SDDC environment and there should be no need to add or pay for additional processing if you need more capacity.

Dell EMC delivers software defined, automated data protection


With Dell EMC Data Protection, you don’t have to compromise on performance, automation, architecture or efficiency for the sake of simplicity.  Our solutions, whether you are deploying our Data Protection software and Data Domain backup storage or our new Integrated Data Protection Appliances, each utilize our software defined data protection architecture to deliver on each of the above best practices.

Dell EMC Data Protection solutions deliver:


  • Automation of VM protection with dynamic mapping to protection policies
  • Leading self-service data protection for VMware environments with our native vSphere integration
  • Minimal bandwidth usage and maximum storage efficiency with our industry-leading edge and target deduplication technology, delivering up to 55:1 storage efficiency and up to 98% bandwidth efficiency
  • Change block tracking restore for lightning-fast VM recovery
  • Disaggregated performance from capacity with distributed virtual proxies that are automatically provisioned, deployed, and load balanced across the SDDC environment




Monday, December 17, 2018

IT Infrastructure, Operations & Cloud Strategies Conference 2018: Dell EMC Storage Perspectives


As we wrap up the year of 2018, it’s always nice to squeeze in one more conference to talk about Dell EMC storage products. For us, this was Gartner’s IT Infrastructure, Operations & Cloud Strategies show in Vegas. This conference traditionally gets C-level executives, storage administrators, industry analysts and decision makers together for four days to hear thought-leadership and strategies regarding technology solutions. This year was no different.

According to Gartner’s numbers, the show pulled in over 2,500 attendees and 120+ exhibitors, with Dell EMC being one of them. As I staffed the booth to talk to customers, analysts, and professionals alike, I kept hearing, “Dell EMC, I love you guys” and “I just bought one of these,” as the attendees looked at our equipment in the booth. Simply put, people are excited about our technology, and that is exactly what you want to hear at a show like this one.

A lot of the conversations I had were with individuals who simply wanted to know more about our products. Most of them were existing customers looking toward the future who wanted to make sure they were evaluating the best storage products for their data centers.

One customer I spoke with, from a financial institution, really wanted to see a Dell EMC PowerMax as they had recently purchased a PowerMax 8000. They are beyond excited to take delivery and start moving their data to the world’s fastest storage array!

Another customer that came by the booth was getting ready to make the move to Dell EMC Isilon. This customer works for a healthcare institution and needed the perfect solution for medical imaging storage. The hospital had used EMC products in the past and was looking forward to getting their hands on Isilon.

Of course, what also makes this show so valuable is the thought-leadership presentations that are given to the attendees. Dell EMC’s Varun Chhabra presented on Data Capital. This is the idea that a company’s data is a capital asset that offers extreme value. We offer solutions that allow customers to unlock their Data Capital. Dell EMC’s Brian Payne spoke about the future and today’s emerging trends of IoT, AI, AR/VR, and Blockchain. He focused on the importance of data-driven decision making and how we can use AI to drive change that keeps customers ahead of their competition. With today’s digital economy, these were both extremely valuable presentations to attend.

Our time at the Gartner IT Infrastructure, Operations & Cloud Strategies Conference was time well spent. It is always great to get out there and speak with our customers and potential new ones about our storage products and the value and performance they can bring to their data centers. Dell EMC is a valuable partner in a customer’s journey in IT transformation and from what I heard, a partner is exactly what many customers are looking for.




Tuesday, December 11, 2018

How Digital Transformation Helps Us Get There First - Dell EMC Certifications


We are engaged in a war of algorithms; a battle fought in cyber space that also plays out across air, land, and sea every day. Digital transformation is the key to winning because it gives us a critical advantage: the ability to execute before the adversary can.

This “decision advantage” comes, in part, from embedding technology into the mission in the service of the warfighter. Technology transformation at the kinetic level, for example, makes efforts at the tip of the spear more successful. Imagine real-time, AI-processed reconnaissance information optimizing ordnance activity on target. Or turning our ships at sea into floating data centers: optimizing communication, battlefield insights, ship defenses, onboard maintenance, and medical care for our wounded warriors.

Today, across the department and in all branches of the U.S. military, IT leaders are looking for solutions to turn their legacy IT footprint into a modern multi-cloud environment.  This transformation will also bring sweeping changes to our workforce.  Tomorrow’s pilot will need to be as good at multi-mode IT systems management as actually flying an aircraft.

Technology transformation with the Department of Defense (DoD) means looking at where computing activity needs to take place. This could include activity in a data center, or on a sensor, drone, mobile device, aircraft, or even an office-based workstation. Where this processing activity, called a ‘workload’, takes place should be optimized for the mission – and not for the convenience of the IT purchasing process. Mission-optimized IT includes domestic, DoD-managed cloud environments and data centers, ad hoc IT networks in forward operating positions with disadvantaged communications, or the battlefield itself.

In support of this transformation, a multi-cloud approach allows the military to deploy infrastructure that is secure and flexible for mission-critical projects. One such example is a recent Defense Department effort to build out a secure, on-premises cloud solution within its existing data center footprint. Outdated and unsupported legacy IT systems were eating up already-scarce funding and leaving our warfighters and their mission exposed to the adversary.

Dell EMC is honored to have partnered with DoD in this effort, known as the On-Site Managed Services (OMS) program. It provides high-availability, high-performance, mission-critical compute services. This cutting-edge IT transformation program allows the DoD to manage their most sensitive workloads and provide compute and processing wherever the mission requires.

OMS illustrates the point that mission success is all about operation and accessibility, requiring different approaches for each unique workload. With a complex map of challenges and mission-critical considerations, the DoD must continue to approach cloud on a workload-by-workload basis for IT modernization success, appreciating cloud as an operating model.




Wednesday, December 5, 2018

DES-3611 Specialist – Technology Architect, Data Protection Exam


Overview


This exam is a qualifying exam for the Specialist – Technology Architect, Data Protection (DECS-TA) track. This exam focuses on recommending and designing data protection solutions using the latest Dell EMC Data Protection products. Dell EMC provides free practice tests to assess your knowledge in preparation for the exam. Practice tests allow you to become familiar with the topics and question types you will find on the proctored exam. Your results on a practice test offer one indication of how prepared you are for the proctored exam and can highlight topics on which you need to study and train further. A passing score on the practice test does not guarantee a passing score on the certification exam.

Products


Products likely to be referred to on this exam include but are not limited to:
• Data Domain 6.1
• Avamar 18.1
• Data Protection Suite for Applications 4.6
• IDPA 2.2/2.3
• RecoverPoint for VMs 5.1
• NetWorker Virtual Edition 18.1
• Avamar Virtual Edition 18.1
• Isolated Recovery Solution 1.0
• NetWorker 18.1
• DPA 18.1
• Enterprise Copy Data Management 2.1
• RecoverPoint 5.1
• Data Domain Cloud Disaster Recovery 18.1
• Cloud Snapshot Manager July 2018
• Data Domain Virtual Edition 4.0

Exam Topics


Topics likely to be covered on this exam include:

Dell EMC Data Protection Product Features, Functions, Software-based Architectures and/or Components (31%)
• Identify and describe the available tools and services to assess a customer's environment for a data protection solution
• Describe the Dell EMC Cloud Data Protection Solutions features, functions, and/or architecture/components
• Describe the Dell EMC Data Domain features, functions, and/or architecture/components
• Describe the Dell EMC NetWorker features, functions, and/or architecture/components
• Describe the Dell EMC RecoverPoint and RecoverPoint for Virtual Machines (VMs) features, functions, and/or architecture/components
• Describe the Dell EMC Integrated Data Protection Appliances (IDPA) features, functions, and/or architecture/components
• Describe the Dell EMC Data Protection Suite for Applications (DPS) features, functions, and/or architecture/components
• Describe the Dell EMC Isolated Recovery Solutions features, functions, and/or architecture/components

Dell EMC Data Domain Solutions Design (8%)


• Identify and describe the best practices for capacity planning, performance tuning, sizing, and designing a Dell EMC Data Domain data protection solution

Dell EMC NetWorker Solutions Design (8%)


• Explain Dell EMC NetWorker capacity planning and performance tuning
• Identify and describe the best practices for sizing and designing a Dell EMC NetWorker data protection solution

Dell EMC Avamar Solutions Design (7%)


• Explain Dell EMC Avamar capacity planning and performance tuning for a data protection solution
• Identify and describe the features, functions, and the best practices for sizing and designing a Dell EMC Avamar data protection solution

Dell EMC RecoverPoint and RecoverPoint for Virtual Machines (VMs) Solutions Design (17%)


• Explain the replication concepts and replication planning for a Dell EMC RecoverPoint and Dell EMC RecoverPoint for VMs data protection solution
• Explain Dell EMC RecoverPoint and Dell EMC RecoverPoint for VMs capacity planning for a data protection solution
• Identify and describe the best practices for designing a Dell EMC RecoverPoint and Dell EMC RecoverPoint for VMs data protection solution