The document discusses software defined datacenters. It explains that software defined datacenters separate the control plane from the hardware using software that allows infrastructure services to be consumed as programmable hardware and software. This approach abstracts intelligence from individual hardware components like storage, servers, and networking to create pools of resources that can be delivered as virtual services. It further discusses how this model enables scalability, dynamism, elasticity, automation, and the integration of new applications.
The document discusses EMC and Microsoft's integrated backup solution with deduplication. It highlights key capabilities like end-to-end support for Microsoft platforms, the evolution from tape-based to disk-based backup using deduplication, and integration with products like Microsoft DPM and EMC NetWorker. Deduplication is shown to dramatically reduce storage needs by eliminating redundant data.
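Deduplication's storage savings come from storing each unique data block once and keeping only references for the repeats. A minimal sketch of fixed-block deduplication follows; the 4 KB block size and SHA-256 fingerprinting are illustrative choices, not specifics of DPM or NetWorker:

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, recipe): the recipe is the ordered list of block
    hashes needed to reconstruct the original stream.
    """
    store = {}   # hash -> block bytes, kept only once per unique block
    recipe = []  # ordered hashes to rebuild the stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    """Reassemble the original stream from the recipe."""
    return b"".join(store[h] for h in recipe)
```

On a backup stream where most blocks repeat, the store holds only the unique blocks while the recipe preserves the full order, so the original data can always be reconstructed.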
Huawei Symantec Oceanspace S2600 Overview (Utopia Media)
The document provides an overview of the Huawei Symantec Oceanspace S2600 storage system. It discusses challenges facing data storage such as the exponential growth of data. The S2600 is positioned as an intelligent storage solution for SMBs, governments, and other organizations. It features high reliability, easy management, flexible configurations, and data services. The S2600 uses a dual-controller design, supports various host ports and disk types, and includes integrated data protection and management software.
This document provides an overview of Integrity NonStop, HP's solution for mission-critical computing. It discusses how (1) IT sprawl is limiting business performance and innovation, (2) HP's converged infrastructure approach can overcome sprawl and achieve high service levels, and (3) Integrity NonStop is HP's innovation for rock-solid reliability and flexibility in mission-critical environments.
The document discusses a partnership between Cisco and Citrix to deliver virtual desktop solutions. It outlines their mission to lead the market in virtual experiences for enterprise and cloud computing. It then provides details on Cisco and Citrix virtualization technologies, use cases, best practices, and performance metrics to showcase scalability, rapid provisioning, networking and security capabilities of their integrated solutions. The document is intended to promote the benefits of Cisco and Citrix virtualization platforms to potential customers.
PowerVM with IBM i and live partition mobility (COMMON Europe)
IBM Power Systems provide virtualization capabilities through PowerVM. PowerVM allows customers to run multiple logical partitions (LPARs) on a single physical server. This improves resource utilization and reduces costs compared to a traditional one partition per server model. Key PowerVM technologies include the hypervisor, dynamic logical partitioning, shared processor pools, virtual I/O server, and live partition mobility. The virtual I/O server (VIOS) allows client partitions like IBM i to leverage virtualized storage, network, and other I/O resources rather than each requiring physical adapters.
This document discusses network virtualization and OpenStack Networking (Quantum). It provides an overview of OpenStack Networking concepts like virtual networks, ports, and subnets. It also describes the plugin architecture and various networking plugins. The document outlines how OpenStack Networking can be extended to support layer 3 constructs and hybrid cloud networking. It provides examples of networking architectures using plugins like Cisco Nexus with OpenStack Networking.
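The network/subnet/port model maps directly onto the Quantum/Neutron v2.0 REST API. Below is a sketch of the request payloads involved in building a virtual network; the Keystone token and endpoint needed to actually send them are omitted, and the helper function names are illustrative:

```python
# Sketch of the OpenStack Networking (Neutron) v2.0 calls that create a
# virtual network, attach a subnet, and plug a port into it. Each helper
# returns (HTTP method, path, JSON body) per the Neutron v2.0 API shape.

def create_network_request(name: str):
    return ("POST", "/v2.0/networks", {"network": {"name": name}})

def create_subnet_request(network_id: str, cidr: str):
    return ("POST", "/v2.0/subnets",
            {"subnet": {"network_id": network_id,
                        "cidr": cidr,
                        "ip_version": 4}})

def create_port_request(network_id: str, device_id: str):
    return ("POST", "/v2.0/ports",
            {"port": {"network_id": network_id,
                      "device_id": device_id}})
```

A plugin such as the Cisco Nexus one receives these logical operations through the plugin API and translates them into device-specific configuration, which is exactly the separation the plugin architecture is designed for.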
Visibility & Security for the Virtualized Enterprise (EMC)
Identifying and understanding high-value digital assets in the context of the business is critical in assessing what workloads to move to the cloud. But doing so is difficult without an effective model to help define and classify these assets. This session presents a down-to-earth methodology for identifying assets and understanding their value that you can apply in critical business decisions.
After this session you will be able to:
Objective 1: Understand what to look for when identifying valuable information assets.
Objective 2: Identify critical steps in the process of identifying and understanding digital assets.
Objective 3: Apply asset value when deciding what digital assets to entrust to the cloud.
Full recording via http://www.brainshark.com/emcworld/vu?pi=zHJzQJGhyzB8sLz0
OSGi is a module system and service platform for Java that provides a dynamic and flexible environment for developers. It allows software to be organized into independent bundles that can be updated without restarting the entire system. Bundles define their dependencies and interfaces, and services allow bundles to collaborate by providing and consuming functionality. The OSGi framework handles loading, dependency management, and lifecycle of bundles, and provides a service registry for bundles to publish and discover services. OSGi addresses the need for a modular architecture in large Java applications and enables continuous software evolution.
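The publish/discover pattern behind the OSGi service registry can be illustrated with a minimal registry sketch. This is a language-agnostic analogy written in Python, not the actual OSGi `BundleContext` API:

```python
# Minimal analogy for the OSGi service registry: bundles register services
# under an interface name, and other bundles look them up by name instead
# of linking to concrete classes, so providers can come and go at runtime.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # interface name -> list of providers

    def register(self, interface: str, provider) -> None:
        """A bundle publishes a service implementation."""
        self._services.setdefault(interface, []).append(provider)

    def unregister(self, interface: str, provider) -> None:
        """A stopping bundle withdraws its service."""
        self._services[interface].remove(provider)

    def get_service(self, interface: str):
        """A consumer discovers a provider, or None if none is available."""
        providers = self._services.get(interface, [])
        return providers[0] if providers else None
```

In real OSGi a consumer must tolerate services disappearing at runtime, which is why lookup can come back empty; that dynamism is what lets bundles be updated without restarting the whole system.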
IBM i client partitions concepts and implementation (COMMON Europe)
This document discusses IBM i client partitions hosted by an IBM i host partition. The agenda includes an overview of iVirtualization, new enhancements, HMC-based setup, things to consider, and customer examples. Installation of IBM i hosting clients on Power systems can begin with the HMC-based quick install guide. Virtual versus physical hardware resources and an overview of IBM i virtual client partitions are also reviewed.
Stott & May is a specialist recruitment firm focusing on high-frequency resourcing for key accounts. They have a dedicated team of delivery consultants and account managers to service client accounts. Some key aspects that make them different include their speed of response, dedicated account managers, discretion, talent retention expertise, and experienced recruitment specialists. The document provides overviews of their IT infrastructure, software development, risk management, finance technology and other specialty practices. It also includes testimonials praising the skills of some of their consultants.
Virtualization technology is described as the platform for cloud computing. The document discusses how virtualization enables cloud computing by allowing the creation of shared resource pools and optimization of infrastructure. It also talks about how virtualization can help data centers evolve into internal clouds through the use of a Virtual Data Center Operating System (VDC-OS). Finally, it provides examples of how VDC-OS and VMware vSphere can provide services, policies, and integration of resources to enable availability, security, and scalability.
In this presentation we review several administration tools available from Microsoft for improving our infrastructure.
Ing. Eduardo Castro Martinez, PhD
Microsoft SQL Server MVP
http://ecastrom.blogspot.com
http://mswindowscr.org
http://comunidadwindows.org
This document discusses solutions for improving I/O performance in virtualized data centers and cloud computing environments. It outlines how server virtualization can create I/O bottlenecks and describes emerging technologies like SSDs, high bandwidth networks, and intelligent software that can help address these issues. SSDs in particular are presented as a cost effective way to improve performance for database and transactional applications by providing more IOPS. The role of new storage class memory technologies in filling the performance gap between DRAM and HDDs is also examined.
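The IOPS argument for SSDs is easy to make concrete. Here is a back-of-the-envelope comparison; the drive figures and prices are deliberately rough assumptions for illustration, not vendor data:

```python
# Back-of-the-envelope IOPS economics behind the SSD argument.
# Assumed figures (illustrative only): a 15K RPM HDD delivering ~180
# random IOPS, an SSD delivering ~30,000 random IOPS.

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Ceiling division: how many drives to reach the IOPS target."""
    return -(-target_iops // iops_per_drive)

def cost_for_iops(target_iops: int, iops_per_drive: int,
                  price_per_drive: int) -> int:
    return drives_needed(target_iops, iops_per_drive) * price_per_drive

TARGET = 50_000  # random-read IOPS an OLTP database might demand
hdd_cost = cost_for_iops(TARGET, iops_per_drive=180, price_per_drive=300)
ssd_cost = cost_for_iops(TARGET, iops_per_drive=30_000, price_per_drive=1_500)
# Roughly 278 HDDs versus 2 SSDs: on $/IOPS the SSD wins decisively
# even at a much higher per-device price.
```

This is why the document can present SSDs as cost-effective for transactional workloads even though they cost far more per gigabyte: for IOPS-bound applications, spindles are the expensive resource.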
VMware ESXi 3.5 update 2 is a next generation, thin hypervisor that is available for free. It partitions servers to create robust virtual machine environments with improved security, reliability and simplified management compared to previous versions. The free version provides many of the features of VMware Infrastructure 3, including support for virtual appliances and virtual machines. It has received positive feedback from customers for its plug-and-play installation and configuration capabilities.
Key to Efficient Tiered Storage Infrastructure (IMEX Research)
The document discusses how solid state storage can improve tiered storage infrastructure. It covers how SSDs enable new systems architectures and improve transaction response times. The document also discusses workload characterization, applications that benefit from SSDs, data mapping, and automated storage tiering software. The key topics are how SSDs are poised to play a key role in efficient tiered storage and next-generation data center architectures.
Brocade: Storage Networking for the Virtual Enterprise (EMC)
The document discusses storage networking technologies for virtualized environments. It summarizes Brocade's Fibre Channel fabrics for scaling SANs across data centers through technologies like Inter-Chassis Links (ICLs) and Ethernet fabrics for supporting protocols like FCoE, iSCSI, and NAS. It also discusses capabilities for improving metro connectivity, automating management through tools like Brocade Network Advisor, and enhancing performance for virtual desktop infrastructures (VDIs) and other emerging workloads.
彭: Elastic architecture in Cloud Foundry and deploy with OpenStack (OpenCity Community)
This document discusses elastic architecture in CloudFoundry and deploying PaaS with OpenStack. It provides an overview of CloudFoundry's architecture pattern with loosely coupled components that can scale out independently and communicate via messages. These include routers to route requests, nodes to run applications and services, and components like the cloud controller, health manager, and droplet execution agent. It emphasizes principles of self-governance, loose coupling, and the ability to run on different infrastructures like OpenStack.
The document discusses private cloud and VCE infrastructure packages. It explains that VCE is a coalition between Cisco, EMC and VMware to accelerate virtualization and private cloud deployments through pre-integrated and tested solutions. It provides an overview of VCE's Vblock infrastructure packages which deliver standardized and predictable IT infrastructure as a service.
The document discusses distributed block-level storage management for OpenStack. It describes the Cloud OS storage system from ITRI, including the distributed main storage (DMS) and distributed secondary storage (DSS) subsystems. DMS provides high performance primary storage using data replication across storage nodes for reliability. DSS provides backup and restore capabilities along with deduplication and wide-area replication for disaster recovery. The Cloud OS storage system is integrated with OpenStack to provide its storage capabilities.
This document describes virtualization solutions using Microsoft Hyper-V and System Center with EMC storage components. It provides configuration details for solutions supporting 50 and 100 virtual machines, including servers, hypervisors, networking, storage and backup components. It also discusses features for virtualizing Microsoft applications and the benefits of using System Center for management.
Breakout session tijdens Proact's SYNC 2013.
VSPEX and vBlock: converged infrastructure building blocks of hypervisor, server, network, and storage (.pptx)
John Lavallée
Practice Mgr – Cloud Services EMEA
EMC | Global Services Partners
This document discusses building a private cloud. It begins with an introduction to cloud computing that outlines deployment models including private, public and hybrid clouds. It then discusses the differences between public and private clouds, with private clouds being dedicated to an organization and having easier security and migration than public clouds. The presentation goes on to demonstrate Microsoft's System Center Virtual Machine Manager 2012 product for deploying a private cloud infrastructure from zero.
The document discusses EMC and Oracle's long-standing partnership in developing solutions to optimize Oracle applications. It outlines three common deployment models for Oracle (aggregation, verticalized, virtualization) and describes the benefits of virtualizing Oracle software, such as 3x higher performance with lower total cost of ownership. It also introduces EMC solutions like Vblock infrastructure platforms, FAST automated storage tiering, and VFCache server flash caching that help address challenges of Oracle I/O performance and optimize storage for virtualized Oracle environments.
This document discusses VMware vSphere 4.0, which provides virtualization capabilities and serves as a cloud operating system. It allows for high consolidation ratios through features like VMotion that enable live migration of virtual machines between physical servers. vSphere optimizes computing resources through features in vCompute, vStorage, and vNetwork that improve performance, availability, security and scalability for applications.
EMC IT's Cloud Transformation, Thomas Becker, EMC (CloudOps Summit)
The document discusses EMC's transformation to an IT-as-a-Service model. Key points include:
1) EMC transitioned IT from an infrastructure focus to applications focus and now a business focus, optimizing IT production for business consumption.
2) This involved virtualizing servers and applications, consolidating data centers, and achieving 90% virtualization of OS images.
3) The transformation aims to provide agility, cost savings, and a 1 day application provisioning time through a service-oriented IT-as-a-Service model.
The document discusses EMC's storage transformation solutions, including the VMAX, VPLEX, and SRM products. It provides an overview of the VMAX family and its performance and capabilities, highlighting specific models like the VMAX 40K. It also covers new software features for VMAX, including Federated Tiered Storage and RecoverPoint integration, and promotes the benefits and cost savings of FAST VP. VPLEX and RecoverPoint are described as enabling access from anywhere and data protection everywhere. Management tools like Unisphere and ProSphere are also summarized.
The world’s information is doubling every two years. In 2011 the world created a staggering 1.8 zettabytes. By 2020 the world will generate 50 times the amount of information and 75 times the number of "information containers", while the IT staff to manage it will grow less than 1.5 times. This session introduces students to various storage networking and business continuity terminologies.
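The doubling and multiplier figures can be cross-checked with simple compound growth: a quantity that doubles every T years grows by a factor of 2^(t/T) over t years. A quick sketch (doubling every two years from the 2011 base gives roughly 22.6x by 2020; a full 50x over nine years would imply doubling about every 1.6 years, so such multipliers are typically quoted from an earlier baseline):

```python
import math

# Growth by factor F over t years with doubling period T satisfies
# F = 2 ** (t / T); both directions of that relation are useful here.

def growth_factor(years: float, doubling_years: float) -> float:
    """How much a quantity grows if it doubles every `doubling_years`."""
    return 2 ** (years / doubling_years)

def doubling_period(years: float, factor: float) -> float:
    """What doubling period produces `factor` growth over `years`."""
    return years * math.log(2) / math.log(factor)

factor_9y = growth_factor(9, 2)      # ~22.6x from 2011 to 2020
period_50x = doubling_period(9, 50)  # ~1.6 years for a full 50x
```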
This session reviews what customers need to know about using EMC products in an OpenStack cloud OS environment. This session also reviews which products are available for use in Folsom, Grizzly, and upcoming OpenStack releases. We briefly examine best practices for those products in lab and production deployments. Finally, we will consider where EMC's future participation in OpenStack may lead.
After this session you will be able to:
Objective 1: Understand what OpenStack open source software is, how it can be used as a framework to build private clouds, and how EMC is contributing to this open source community.
Objective 2: Identify which EMC products are available for use in OpenStack today, and how those products add value to private cloud solutions that use OpenStack.
Objective 3: Understand requirements, configurations, and best practices for deployment of EMC products in both lab and production environments.
EMC VSPEX for Virtualizing Your Data Center (CTI Group)
This document discusses EMC's VSPEX proven infrastructure reference architectures. It provides an overview of the IT consumption model shifting towards reference architectures and single SKU solutions. VSPEX offers validated and tested configurations for private cloud, virtual desktop infrastructure, and applications. The document highlights VSPEX strategic partnerships with Cisco, provides examples of VSPEX sizing and configurations, and positions VSPEX solutions for virtualized environments optimized for VMware.
Getting Started Developing with Platform as a Service (CloudBees)
The document provides an overview of getting started with platform as a service (PaaS). It begins with the speaker's background and then discusses how cloud computing represents an inevitable shift similar to the transition to electricity. The rest of the document focuses on explaining key concepts regarding PaaS, including differences between infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It also provides recommendations for getting started with PaaS.
OneCommand Vision 2.1 webcast: Cutting edge LUN SLAs, AIX on PowerPC and flex... (Emulex Corporation)
Our customers look to Emulex OneCommand™ Vision for improved I/O performance and availability. The first part of our Performance Assurance Webinar Series will focus on how they can increase their performance and, ultimately their competitiveness, with the upcoming release of OneCommand Vision.
OneCommand Vision 2.1 includes expanded OS support, including AIX, as well as powerful LUN SLA monitoring that reports Class of Service, Path Availability and I/O size specific latency reporting.
Our Performance Assurance Webinar Series features case studies and tips most relevant to today’s data center needs. Join us to discover how you can use OneCommand Vision 2.1 to achieve Performance Assurance on your most critical servers and get the most out of your applications.
Build Scanning into Your Web Based Business Application (bgalusha)
Learn about the new EMC Captiva Cloud Toolkit, a software developer kit (SDK) that allows web application developers to quickly add scanning and imaging functionality directly to their web-based business applications. Learn how partners are leveraging the toolkit to deliver Web-based scanning solutions.
Introducing OneCommand Vision 3.0, I/O management that gives your application... (Emulex Corporation)
Emulex's OneCommand Vision is a storage management tool that provides visibility into application-level I/O performance across the entire storage area network. Version 3.0 expands support to monitor more host-side and directly-attached storage devices, offers a portfolio of products to meet different customer needs, and provides more detailed performance reporting and alerting functionality. The new release aims to give users improved insight into I/O issues affecting application performance across diverse multi-protocol environments.
To give this presentation the right context, we need to start at a slightly higher level and talk a little about cloud. We have worked with it for a long time, and today it is not just talk; we use it too. So what are the drivers behind cloud? A few of them: scalability; dynamism and elasticity, the ability to quickly grow or shrink capacity, and with it cost, as needed; automation; new applications; and more. Today I will talk a great deal about the Software Defined Datacenter, a term around which there is some confusion.
There is a lot of terminology confusion today: cloud; public, private, and hybrid models; software defined; and so on. To sort out what the different terms mean to us: we see IT as a Service, or cloud, as being about operational models, how we consume IT. This is the long-term vision for producing and consuming IT in the most efficient way. The Software Defined Datacenter is a technical platform that makes it easier to deliver consumption-based IT, or cloud, or whatever we want to call it: IaaS, PaaS, SaaS, and so on. This presentation will cover the software defined datacenter with a focus on storage. There is a lot of talk about software defined everything these days, and why is that? Let's look back a little in time.
Physical limitation: take a server as an example of a physical element. <click> No matter how powerful your server is, there is a limit to how much it can handle. At some point you reach a threshold where either you stop sleeping well or, if sleep is not your problem, it takes a while longer until the server hits its limit and can no longer deliver to SLA. <click> Automation: the next limitation of physical infrastructure is the difficulty of automating changes to physical constructs or components. In virtual hardware it is no problem to add another network card, disk, CPU, memory, and so on, provided the operating system supports it. The same thing is considerably more problematic with physical hardware. <click> Mobility: in this example we are talking about a server, and it need not be very heavy, but mobility is the next limiting factor I want to highlight. One server may be no problem to move, but what if we have to move it 500 kilometers? Had it been virtual, we would have moved it over the network or, at worst, on an external disk. Now it becomes UPS or the like instead. If we want to move all our servers to an external provider, would it not be good if they were virtual? There are of course many other advantages to virtualization, such as hardware abstraction, encapsulation, and more, but for this discussion I think these arguments suffice. <click> The solution: we need something that can break the laws of physics, bridge the limitations, and change the datacenter as we know it today. This is happening piece by piece, so let's take a look. <click> Changes: letting the infrastructure change how the different elements should be configured to run optimally.
Servers: over time the server platform has gone through a major journey, from mainframe to virtualized x86 platform with everything that entails. An extremely important factor is standardization on the Intel x86 architecture, which is also found in other parts of the datacenter, such as storage. Another important factor is the transition to a more distributed architecture for a scalable, flexible environment. We need not go deeper into that now. <click> Network: on the network side, similar things are happening. One can draw a parallel between the server side's mainframe and the network side's chassis-based core. The development we see here is the move from a chassis-based core to a distributed core: virtual chassis are built from standard switches in spine/leaf designs that allow extreme scalability and great flexibility. Two major factors drive this development: SDN and standardization. Take a standard form factor such as a 1U 10GbE switch; nearly all vendors use the same chipset there today. Why? Previously, large sums of R&D money were plowed into building custom ASICs, proprietary hardware with each vendor's USPs buried deep in the hardware stack. The very idea of software defined is to free each hardware component from specific intelligence and lift it out into a layer above. In the network's case, that means all network configuration lives in its own controller outside all the switches. In other words, you no longer need to configure each device in the network manually. And there are of course many other advantages. <click> Security: on the security side we have the vShield framework, where VMware, through various APIs, has let partners in the security ecosystem execute in the hypervisor layer outside the attack surface itself, which is impossible in a physical implementation. Here too, the approach is complemented by placing the intelligence outside the limiting hardware.
Virtualized security functions can more easily scale out and adapt to the dynamics of the application layer. <click> What happens in the storage layer is what we will look at more closely in this presentation.
In the mostly physical world, it often took weeks to fully deploy a new application. [Note: you might want to choose if you want to talk about deploying apps or services. Some people prefer the term “services” because it implies background “services” that apps depend on, as well as apps. Analysts will often refer to it that way.]<click>In a mostly virtual world, it’s certainly better. Many customers can set up and deliver new applications within hours. But there’s still a lot of work to be done.<click>Setting up a new VM is easy. It’s instant. The difficult part is all the surrounding infrastructure services you need to support that new app. It has dependencies on storage, networking, security. It has requirements around availability and business continuity that must be taken into consideration. It’s all these other services that take the most time, not getting a VM deployed!<click>What if we could shrink down that problem, and make it as simple and straightforward as it is to configure and deploy a new VM? Make it almost “automatic”, by having a fully dynamic, software-driven datacenter.<click>If we can capture a set of policies that can drive and automate the provisioning of all the infrastructure services needed for a new application, and capture it in a container – a virtual datacenter, or VDC – then we can achieve our goal of deploying new applications within minutes or seconds.
What all of this is adding up to is that Virtualized Software is Replacing Specialized Hardware as the core material in designing the modern datacenter. In short, differentiated value lies in how software interacts and integrates, passing hardware control from proprietary management tools to higher level orchestration tools via open programmable APIs rather than deterministic, proprietary, scriptable CLIs.
An example of a flow within the software defined datacenter. Here is an example of how it can work. Let's say you are going to provision 50 virtual machines for test and development, a training environment, or the like. <click> In this new model, you go into a self-service portal and order your servers. We assume the order is approved, and at this point the software takes over. <click> Orchestration software is used to coordinate the creation and delivery of these services. <click> With an EMC/VMware stack, the portal and orchestration would be vCloud Automation Center and vCloud Orchestrator. These would use vCloud APIs to ask vCloud Director to provision these servers to the right business unit and its isolated bubble in the virtual datacenter. vCloud Director would use vCenter APIs to use templates in vCenter. Assuming the storage system supports VAAI, which all EMC systems do, vCenter would make an API request to storage to create 50 copies of the blocks that make up your template. At that point, vCenter would register these as unique VMs and start them up, and the result would then be sent back up the stack all the way to you: the transaction is complete and your machines are booting. Note that the only human transaction was at the beginning, when you ordered your machines. Everything else was handled by programmable software and hardware, coordinated by an orchestration engine from start to finish.
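The chain just described, from self-service order down to array-offloaded cloning, can be mocked in a few lines to show where control passes at each step. All class and method names here are illustrative stand-ins, not the actual vCloud, vCenter, or VAAI APIs:

```python
# Mock of the provisioning chain: portal order -> orchestrator ->
# virtualization layer -> array-offloaded clone. Illustrative names only.

class StorageArray:
    def offloaded_clone(self, template: str, count: int):
        # A VAAI-style primitive hands the copy to the array, which
        # clones the template's blocks without host-side data movement.
        return [f"{template}-clone-{i}" for i in range(count)]

class VCenterLike:
    def __init__(self, array: StorageArray):
        self.array = array

    def provision_from_template(self, template: str, count: int):
        disks = self.array.offloaded_clone(template, count)
        # Register each clone as a unique VM and power it on.
        return [{"vm": d, "state": "booting"} for d in disks]

class Orchestrator:
    def __init__(self, vcenter: VCenterLike):
        self.vcenter = vcenter

    def handle_order(self, template: str, count: int, approved: bool):
        if not approved:
            return []  # the only human step is the order and its approval
        return self.vcenter.provision_from_template(template, count)
```

Running `Orchestrator(VCenterLike(StorageArray())).handle_order("dev-template", 50, approved=True)` yields 50 booting VMs with no further human interaction, which is the point of the flow above.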
The traditional datacenter often consists of different technology and application silos. Often it is the application and its owner that dictate the requirements, which are then funneled down into different technology choices for the infrastructure. You can think of the app owners as spoiled toddlers and us as the curling parents. Each app has its own vertical with CPU, OS, storage, network, security, and even management systems. If we keep curling our application owners this way, the infrastructure will become more sprawling and complex, which means we need more resources to tuck the infrastructure in and keep the lights on. It gets expensive; in many cases, as much as 70% of OPEX goes to this. <click>
What needs to be done is to transform the different physical silo constructions into pools of shared resources: pools of storage, server, network, and security that are centrally managed. Today almost everyone has 80-90% of what is required to achieve this; most of what remains relates to processes and organizational questions. Once that is done, focus can shift from configuring and monitoring physical hardware to working more on supporting the business. We have now reached a point where the fine and very capable hardware gets a demoted role: we no longer need to care which model or color a piece of hardware has. It takes a lot from the hardware to ultimately achieve the status of being demoted to a commodity. The hardware must be so well integrated with the infrastructure above it that it can itself understand what capacity, performance, and availability it can deliver, and then change its configuration as needed to meet SLA requirements. So what do the integrations from EMC's side up to VMware look like? <click>
Alternate, shorter version of the previous slide … make up your own talk track on the multi-year evolution, focusing on integration and passing up control … baby steps, over time, leading to SDDC.
All of these things … billions of dollars of R&D … lead us to slides like this. Now it's important to look at this through the right lens. This isn't here simply to flash at the customer, thump our chest, and say that we are #1; rather, it is a critical proof point for our most important differentiator. In the current world, the question of "why EMC?" isn't answered with some list of features and functionality; our greatest asset is that we have an invisibility cloak. Our software better integrates with everything in the broader IT stack, making the lives of our customers easier. Our deep integration allows customers to do the following …
As the Cloud Operations model evolves toward the Software-Defined Data Center, we see greater levels of abstraction of the underlying infrastructure hardware, with greater levels of intelligence moving up the stack. Infrastructure resources are now virtualized and accessed via open RESTful APIs. EMC and VMware have been enabling this type of functionality through VMware APIs (VASA, VAAI, VADP) for the past couple of years, allowing greater visibility between server/VM and storage. These abstracted, API-driven models will continue to evolve for networks, security, and other management elements.
Just as we laid out a step-by-step model to help customers on their “journey to the cloud”, we will also lay out a model to manage the transition to the new management models needed for Cloud Operations.
[Phase 1] At the foundation is the automation that is needed within each of the technology areas (server, storage, network and security). By leveraging tools that not only provide granular management, but also can integrate with the virtualization platform, IT groups can begin to deliver more agile operations and automate repetitive tasks. [INSERT DISCUSSIONS ABOUT EMC PROSPHERE, UNISPHERE, IONIX, SMARTS, NCM, WATCH4NET]
[Phase 2] Begin to leverage the Cloud Operations functionality that is available with the virtualization platform. This will treat the infrastructure as pools of resources, allowing IT to manage demand and performance in a more cost-effective, flexible way. [INSERT DISCUSSION OF VMWARE VCLOUD SUITE]
PROJECT BOURNE will allow Cloud Operations to manage heterogeneous storage environments, working closely with VMware Software-Defined Data Center architecture components (vCloud Suite). BOURNE provides the control plane (not the data plane) for Cloud Operations interaction with VMware. Data-plane interaction remains between the application/VM and the storage systems, using standard protocols.
Much in the datacenter is virtualized today, but the same mechanisms are still used to allocate and manage storage. VMware is going to change that! What I am about to talk about is the future; it does not exist today, but it will probably arrive next year. So what is in the works? VMware is working on a new storage stack where major changes will be made: improvements for traditional storage, SAN/NAS, but they are also looking at complementing it with a distributed infrastructure built on internal disk (DAS). There are three main categories we will look at. <click> Virtual Volumes, or vVols. <click> Virtual Flash, where internal flash disk becomes a Tier 1 resource that can be controlled like memory and CPU resources in a host. <click> And Virtual SAN, distributed internal disks, which will suit new applications where proximity to disk matters, such as Hadoop. Together they make up Software Defined Storage.
The vision for Software Defined Storage is to pool resources from all types of storage, SAN, NAS, DAS, and flash, into large virtualized storage pools. You should be able to carve out storage based on service profiles instead of adapting to physical limitations, as is the case today. Each virtual machine gets the capacity, availability, and performance it needs. The service profile is applied to each individual VMDK, not per datastore or LUN/filesystem as today. Each object carries its service profile, and that profile is the policy definition for all automation that happens in the storage layer.
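Per-VMDK service profiles can be pictured as a small policy object that placement logic consults for each individual disk. A sketch under assumed names follows; the profile fields and pool model are illustrative, not the actual vVols policy schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceProfile:
    """Illustrative per-VMDK policy: capacity, availability, performance."""
    capacity_gb: int
    availability: str   # e.g. "replicated" or "single-copy"
    max_latency_ms: int

def place_vmdk(vmdk: str, profile: ServiceProfile, pools: dict) -> dict:
    """Pick the slowest (assumed cheapest) pool that still meets the
    profile's latency bound: policy follows the disk, not the datastore."""
    for pool, latency_ms in sorted(pools.items(), key=lambda kv: -kv[1]):
        if latency_ms <= profile.max_latency_ms:
            return {"vmdk": vmdk, "pool": pool, "profile": profile}
    raise ValueError(f"no pool satisfies the profile for {vmdk}")
```

With pools like `{"flash": 1, "sas": 8, "nl-sas": 20}` (milliseconds), a strict 5 ms profile lands on flash while a relaxed 10 ms profile lands on SAS, so two VMDKs of the same VM can live on different tiers, each governed by its own policy.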
By virtualizing server flash and converting it into a resource pool in the cluster, just as is done with CPU and memory.