This document provides an overview and summary of key points from a presentation on designing virtual infrastructures and hypervisors. It discusses prerequisites, assessing which servers are good candidates for virtualization, measuring server performance, determining the right amount of RAM for virtual machines, different types of virtualization technologies, high availability options, and live migration capabilities.
This document provides guidance on designing a virtual desktop infrastructure (VDI). It discusses key decision points around the hypervisor, servers, and storage. It recommends determining user groups, applications, and requirements through piloting before finalizing the design. The document also analyzes options for the hypervisor, servers including CPU, memory, and local storage considerations, and storage including the impact of VM density and hidden capacity needs. Monitoring IOPS and latency is emphasized as critical to ensuring a successful VDI deployment.
This document discusses high availability, disaster recovery, and backup considerations for Microsoft Hyper-V virtual machines. It covers Hyper-V architecture, anatomy of a virtual machine, challenges with backing up virtual machines including transactional consistency, and different approaches to backups including file-level and image-level. It also discusses high availability options for Hyper-V like live migration and replication, and disaster recovery strategies ranging from days to immediate recovery depending on budget and needs.
Building vSphere Perf Monitoring Tools - Pablo Roesch
This document discusses building performance monitoring tools for VMware vSphere using the vSphere APIs. It begins with an overview of common use cases for monitoring CPU, memory, disk, and network performance. These include monitoring high CPU ready times, memory ballooning vs swapping, disk latency, and network throughput. The document then covers techniques for building applications that collect performance data using the vSphere APIs. It provides examples of useful metrics and how to identify issues like CPU overcommitment. The target audience is described as system administrators and VMware partners looking to integrate performance monitoring into their own tools.
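As a concrete illustration of the kind of collector such tools are built around, here is a minimal sketch using pyVmomi (the Python SDK for the vSphere API) to pull the cpu.ready.summation counter behind high CPU ready times for a single VM. The vCenter address, credentials, and VM name are placeholders, and error handling is omitted; treat it as a sketch, not a finished tool.

```python
# Minimal sketch of a vSphere performance collector using pyVmomi.
# Host, credentials, and the VM name "app-vm-01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="monitor",
                  pwd="secret", sslContext=ctx)
try:
    perf = si.content.perfManager

    # Map counter names (group.name.rollup) to numeric counter IDs.
    counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
                for c in perf.perfCounter}

    # cpu.ready.summation is the counter behind "high CPU ready times".
    metric = vim.PerformanceManager.MetricId(
        counterId=counters["cpu.ready.summation"], instance="")

    # Locate a VM by name via a container view (raises if not found).
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-vm-01")

    # Real-time stats: 20-second interval, last 15 samples (~5 minutes).
    spec = vim.PerformanceManager.QuerySpec(
        entity=vm, metricId=[metric], intervalId=20, maxSample=15)
    for result in perf.QueryPerf(querySpec=[spec]):
        for series in result.value:
            # Ready time is reported in milliseconds per sample interval.
            print(series.id.counterId, series.value)
finally:
    Disconnect(si)
```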
This document discusses virtualizing tier 1 applications. It begins by showing how virtualization adoption has increased significantly for mission critical applications. It then discusses specific steps and considerations for virtualizing tier 1 applications, including:
1. Ensuring the platform (hardware, virtualization software, etc.) can adequately support the application.
2. Ensuring the people and processes are in place to design, implement, operate and troubleshoot the virtualized application. This includes discussing skills, support models, change management and monitoring.
3. Reviewing the application itself and existing reference architectures to understand virtualization best practices and sizing for that application. The goal is to virtualize at the application layer rather than the physical server layer.
The Best Storage For VMware Environments Customer Presentation Jul201 - Michael Hudak
Server virtualization is being widely adopted throughout the industry. Server virtualization places new demands on the storage infrastructure that should be considered early in the design process. NetApp provides storage and data management solutions that uniquely enable effective server virtualization environments, and which further extend the benefits of server virtualization. In this presentation, we’ll review why NetApp is the best storage solution for virtualized server environments.
Five things virtualization has changed in your DR plan - Josh Mazgelis
Are you still rolling with the changes? Virtualization has made a huge impact on the way we deploy our computer workloads, and with that it has also changed the ways in which we protect them. The business continuity plans in place for IT even just five years ago look very different than what many companies have in place today. Keeping on top of these changes will help you understand your recovery capabilities, and your limitations as well. Join us with our friends at Neverfail and make sure you're keeping your IT business continuity plans spicy and fresh!
Vizioncore Economical Disaster Recovery through Virtualization - 1CloudRoad.com
Virtualization enables more affordable disaster recovery for SMBs. Previously, having a duplicate backup site required duplicate expensive hardware and infrastructure. With virtualization, virtual machines can easily be copied to alternate backup sites for quick recovery in the event of failure. Testing backups is also simpler through virtual machine replication. Vizioncore provides virtualization solutions like vReplicator that optimize replication speeds and storage usage to enable cost-effective disaster recovery for SMBs through virtualization.
VMworld 2013: Implementing a Holistic BC/DR Strategy with VMware - Part Two - VMworld
VMworld 2013
Jeff Hunter, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Ken Werneburg, VMware
How to achieve better backup with Symantec - Arrow ECS UK
Symantec provides holistic data protection solutions to address common customer challenges with backup and recovery, including:
1) Disparate backup solutions that add complexity and cost as data grows in volume and organizations virtualize.
2) Struggling to meet backup windows and service level agreements as data increases in size.
3) Looking for ways to reduce cost, complexity, and risk across their backup and recovery environment.
Symantec's portfolio includes NetBackup for large enterprises and Backup Exec for small and medium businesses, both utilizing shared deduplication and virtualization technologies. Symantec also offers appliances and cloud options for simplified backup and disaster recovery.
VMworld 2013: DRS: New Features, Best Practices and Future Directions - VMworld
The document discusses new features and future directions for VMware Distributed Resource Scheduler (DRS). Key points include:
1) DRS 5.5 introduces features like automatically tuning the number of VMs per host and better handling of latency-sensitive and CPU-intensive workloads.
2) DRS is integrated with new storage technologies like VMware vFlash and vSAN. It also supports autoscaling of proxy switch ports.
3) Future areas of focus include network DRS with bandwidth reservations, more accurate static VM overhead memory estimation, and proactive DRS monitoring for potential issues.
This document summarizes a presentation about architecting a virtual infrastructure. The presentation covers design decisions, real world examples, potential pitfalls, and taking an interactive approach. The agenda includes an introduction, overview of design patterns, and a question and answer section. Key aspects of the virtual infrastructure design process are discussed, such as gathering requirements, vision, architecture, transition planning, and change management. Design patterns around sizing, scaling, hosts, networking, storage, and virtual constructs are also covered.
This document discusses the history and development of the Xen hypervisor project. It provides an overview of how paravirtualization and hardware-assisted virtualization have improved performance. It also examines how virtualization benefits security through policy enforcement and workload isolation. Network and memory management virtualization techniques are described that improve performance for virtual machines.
This document summarizes the benefits and implementation of virtual server design at DHMC. It discusses how virtual servers provide better hardware utilization, reduced physical infrastructure needs, and cost savings compared to traditional physical servers. Specific servers proposed for virtualization are listed. Backup strategies and the cost savings from virtualizing are also summarized.
This Blueprint is designed to help customers who are utilising OST technology with Backup Exec's Deduplication Option to improve back-end storage capabilities within a complex backup environment.
Relentless Information Growth
The data deduplication technology within Backup Exec 2014 breaks down streams of backup data into “blocks.” Each data block is identified as either unique or non-unique, and a tracking database is used to ensure that only a single copy of a data block is saved to storage by that Backup Exec server. For subsequent backups, the tracking database identifies which blocks have been protected and only stores the blocks that are new or unique. For example, if five different client systems are sending backup data to a Backup Exec server and a data block is found in backup streams from all five of those client systems, only a single copy of the data block is actually stored by the Backup Exec server. This process of reducing redundant data blocks that are saved to backup storage leads to significant reduction in storage space needed for backups.
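The mechanics are easy to see in miniature. The sketch below is a toy model of that block-tracking process, not Backup Exec's actual (proprietary) implementation; the 64 KB block size and SHA-256 fingerprints are assumptions chosen purely for illustration.

```python
# Toy illustration of block-level deduplication with a tracking
# database. Backup Exec's real on-disk format and block-identification
# scheme are proprietary; block size and hashing here are assumed.
import hashlib

BLOCK_SIZE = 64 * 1024  # assumed block size

block_store = {}   # fingerprint -> block bytes (one copy per unique block)
tracking_db = {}   # backup name -> ordered list of block fingerprints

def backup(name: str, data: bytes) -> None:
    """Split a backup stream into blocks, storing only unseen blocks."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in block_store:   # unique block: store a single copy
            block_store[fp] = block
        recipe.append(fp)           # non-unique block: just reference it
    tracking_db[name] = recipe

def restore(name: str) -> bytes:
    """Reassemble a backup stream from its block recipe."""
    return b"".join(block_store[fp] for fp in tracking_db[name])

# Five clients whose streams share a common block: the shared block
# is stored once, so 6 unique blocks are kept rather than 10.
shared = b"A" * BLOCK_SIZE
for n in range(5):
    backup(f"client-{n}", shared + bytes([n]) * 100)
print(len(block_store))                           # 6
assert restore("client-0") == shared + b"\x00" * 100
```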
Data growth has driven the need for greater investment in IT infrastructure, and data protection processes such as backup compound that growth by creating multiple copies of primary data for operational and disaster recovery. This has also made the backup infrastructure far more complex. While disk-based systems inherently offer faster restores, they can make backup environments more complex and difficult to manage, and many backup solutions struggle to handle advanced storage device capabilities such as data deduplication, replication, and the ability to write directly to tape.
Power of OpenStorage Technology (OST)
Symantec Backup Exec software and OpenStorage technology (OST) are designed to provide centrally managed, edge-to-core data protection that spans multiple sites, delivers disk-to-disk-to-tape (D2D2T) functionality, and automates data movement. The OpenStorage API, introduced in Backup Exec 2010, automates the movement of data between sites and storage tiers and acts as a single point of management and catalog for backup data, regardless of where that data resides (remote office or corporate data center), what type of media it is stored on (disk or tape), or its age (recent backup or long-term archive), providing better control of advanced storage devices.
The OpenStorage initiative allows customers to make better use of advanced, disk-based storage solutions from qualified partners. It ensures tighter integration between the backup software and the storage, and delivers greater efficiency and performance through easy-to-deploy, purpose-built appliances that do not carry the limitations of tape emulation devices. With a third-party OST plug-in enabled by Backup Exec, customers achieve faster backups to deduplication appliances.
Symantec Corp. (Nasdaq: SYMC) today announced it will deliver a new approach for modernizing backup and recovery, a process that has become unnecessarily complicated and expensive as organizations’ data stores grow exponentially. Compared to traditional backup, Symantec’s approach enables 100 times faster backup, eases management and simplifies recovery if a disaster occurs, helping customers realize significant cost savings while better protecting their business information.
5 Ways Your Backup Design Can Impact Virtualized Data Protection - Storage Switzerland
Virtualization specific backup applications, like Veeam, are the fastest growing segment of the data protection market, and for good reason. They promise to provide better, faster and more accurate data protection, while almost eliminating application recovery times. But the challenge is, your backup architecture can actually render many of the value-added features of VM specific backup applications totally useless. In this webinar, join Storage Switzerland's founder George Crump and ExaGrid's Kevin Russell, VP of North America Systems Engineers, for an interactive discussion of what these challenges are and more importantly how to solve them.
Veeam webinar - Deduplication best practices - Joep Piscaer
This document discusses best practices for using data deduplication with Veeam Backup & Replication 6.5 and Windows Server 2012. It recommends using data deduplication for backups with long retention periods of over 60 days to reduce storage costs. It provides guidance on planning and configuring deduplication, including sizing estimates, optimizing the backup repository, using forward incremental backups, and enabling inline deduplication and compression. It also demonstrates how Windows Server 2012 provides global deduplication across backup jobs and volumes.
The document describes an IT solution blueprint for building efficient disaster recovery (DR) solutions using a cookie cutter approach. It outlines a DR solution built on VMware, NetBackup, and NEC servers/storage using virtualization. The solution is designed to meet tight RPO and RTO requirements of 5 minutes. It demonstrates failover and failback workflows to move operations from a production site to DR site and back. The blueprint approach aims to reuse proven architectural principles and building blocks to deliver more sophisticated, reliable solutions cost effectively.
Backup Exec Blueprints: How to Use
Getting the most out of Backup Exec blueprints
These Blueprints are designed to show customer challenges and how Backup Exec solves these challenges.
• Each Blueprint consists of:
‒ Pain Points: What challenges customers face
‒ Whiteboard: Shows how Backup Exec solves the customer challenges
‒ Recommended Configuration: Shows recommended installation
‒ Dos: Gives detailed configurations suggested by Symantec
‒ Don'ts: What configurations & pitfalls customers should avoid
‒ Advantages: Summarizes the Backup Exec advantages
• Use these Blueprints to:
‒ Understand the customer challenges and how Backup Exec solves them
‒ Present the Backup Exec best practice solution
Virtualizing Tier One Applications - Varrow - Andrew Miller
This document provides best practices for virtualizing mission critical applications like Exchange and SQL Server. It discusses the top 10 myths about virtualizing business critical applications and provides the truths. It then discusses best practices for virtualizing Exchange, including starting simple, licensing, storage configuration, and high availability options. For SQL Server, it covers starting simple, licensing, storage configuration, migrating, and database best practices. It also discusses tools that can be used for database performance analysis when virtualized like Confio IgniteVM and vCenter Operations.
VMworld 2013: DR to The Cloud with VMware Site Recovery Manager and Rackspace... - VMworld
VMware Site Recovery Manager (SRM) and Rackspace disaster recovery planning services provide a simple and reliable way to replicate virtual machines and applications to the cloud for disaster recovery. SRM automates replication and recovery, replacing complex manual runbooks. It supports options like vSphere Replication and storage-based replication. Rackspace offers SRM as a service with array-based replication and helps customers test recovery plans. Using these tools and services provides lower-cost disaster recovery than traditional approaches and ensures applications can be recovered reliably.
Better Backup For All Symantec Appliances NetBackup 5220 Backup Exec 3600 May... - Symantec
Symantec’s latest backup appliances, the NetBackup 5220 and Backup Exec 3600, now include the latest NetBackup 7.5 and Backup Exec 2012 software announced by Symantec earlier this year. The new appliances deliver on Symantec’s Better Backup for All initiative to address what Gartner has called “The Broken State of Backup.”
Symantec continues to deliver on its information management strategy to enable organizations to protect their information completely, deduplicate everywhere to eliminate redundant data, delete confidently and discover efficiently with Enterprise Vault 9.0, Enterprise Vault Discovery Collector, NetBackup 5000 and the NetBackup Cloud Storage for Nirvanix.
Double-Take Software provides workload optimization solutions including disaster recovery, high availability, server migration, and management. It has over 19,000 customers including half of the Fortune 500. Solutions are hardware and storage agnostic and support migrations between physical and virtual environments with minimal downtime. Real-time replication allows migrations to complete within minutes.
Flexibility In The Remote Branch Office VMware Mini Forum Calgary - James Charter
VMware Mini Forum Calgary Afternoon Keynote Presentation, February 18, 2010. Overview on how Virtualization Technologies can provide flexibility and additional value in the Remote Office / Branch Office (ROBO). Topics discussed: Centralized vs. Distributed Deployment Models, Backup, Data Replication, Disaster Recovery, vSphere features, Site Recovery Manager, Virtual Desktop, WAN Acceleration.
Implementing a Disaster Recovery Solution using VMware Site Recovery Manager ... - Paula Koziol
IBM Spectrum Virtualize delivers business continuity capabilities using a stretched cluster configuration together with VMware Site Recovery Manager (SRM). The result is an end-to-end disaster recovery solution for organizations of all sizes. Join this session to understand how IBM Spectrum Virtualize, including offerings like IBM SAN Volume Controller (SVC) and IBM Storwize Family, integrates with VMware SRM to automate and optimize disaster recovery operations. Everyone who works in mission critical environments understands the need for high availability and effective solutions for planned and unplanned outages. Organizations demand disaster recovery operations that are fully automated and can be executed in a repeatable manner, so that they are always prepared for disaster situations. This IBM-VMware solution offers SMB and enterprise customers the ability to survive a wide range of failures and enables seamless migration of applications across company sites for various planned activities, enabling zero-downtime application mobility.
Double-Take for Migrations - thinkASG University Series - thinkASG
Presentation given on Jan 13, 2015 by Greg Ross from Vision Solutions as part of the thinkASG Extending your Data Center with Windows Server series.
The presentation focused on using Double-Take to seamlessly migrate workloads from on-premise to public cloud, from public cloud to another public cloud, and even from public cloud back to on-premise.
Greg takes us through how to move from VMware to Hyper-V and also from Hyper-V to VMware, depending on your specific requirement.
Originally from thinkASG "Extending your Data Center with Windows Server"
http://www.thinkasg.com/about-us/events/extending-your-data-center-with-windows-server/
This document summarizes a presentation on understanding virtualization's role in auditing and security. It begins with introducing the speaker, Greg Shields, and his background and expertise in virtualization. It then discusses some key points about virtualization including what it is, what it does by virtualizing computer resources like memory, processors, network cards and disks, and some of the problems it can help solve like disaster recovery and server consolidation. It also discusses the seven elements of a successful virtualization architecture including recognizing hype, doing an assessment of your environment, purchase and implementation, physical to virtual conversions, high availability, backups, virtualizing desktops, and disaster recovery implementation.
The document discusses strategies for constructing and administering VMware vSphere environments. It notes that 44% of virtualization deployments fail due to issues like lack of ROI quantification and training. 55% of organizations experience more problems than benefits with virtualization due to issues like lack of visibility, tools, and education. The document advocates becoming an "ESXpert" to elevate your experience with virtualization and avoid common pitfalls. It outlines six typical steps in a virtualization implementation including environment assessment, constructing virtualization, backups expansion, virtualization to private cloud, virtualization at the desktop, and DR implementation.
PHP – Faster And Cheaper. Scale Vertically with IBM i - Sam Hennessy
Is scaling horizontally really the only way to scale your PHP application? If you believe that, you could be missing a huge opportunity. This talk lays out why scaling vertically on the Power Systems platform can be a superior alternative to a traditional LAMP stack, offering simplified development, reduced operating costs, and a true enterprise-quality database.
This document discusses virtualization and provides guidance on virtualizing servers. It covers:
- Reasons for virtualization like increased server utilization and efficiency
- Steps for planning virtualization including addressing organizational challenges
- Factors for identifying good candidates for virtualization like application vendor support
- Best practices for the virtualization process including establishing a baseline and testing
- Potential issues to watch out for called "gotchas" and hints to improve performance
- Case studies on how Allstate and Accenture benefitted from virtualization.
Best Practices For Virtualised SharePoint T02 Brendan Law Nathan Mercer - Flamer
This document provides best practices for virtualizing SharePoint environments. It discusses why organizations virtualize, recommended hardware, licensing, storage considerations, supported virtualization technologies, guidelines for virtualizing different SharePoint roles like web servers and databases, backup strategies, and tools for managing virtual environments. The presentation emphasizes the importance of right-sizing virtual machines, using dedicated storage, and understanding how virtualization may impact different roles like databases.
VMworld 2013: Virtualization Rookie or Pro: Why vSphere is Your Best Choice - VMworld
VMworld 2013
Eric Horschman, VMware
Jeff Margolese, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document discusses challenges and opportunities in the hosting industry and how virtualization can help address them. It summarizes that the hosting industry provides website, email, and server hosting services. It faces challenges around space, power usage, and flexibility. Virtualization allows hosting providers to run multiple virtual machines on fewer physical servers, reducing costs and increasing flexibility. The document recommends virtualizing servers, storage, and networking wherever possible to provide a fault-tolerant and scalable cloud infrastructure. It also emphasizes the importance of management and automation tools for virtualized environments.
WebSphere App Server vs JBoss vs WebLogic vs Tomcat (InterConnect 2016) - Roman Kharkovski
This document provides a competitive comparison of WebSphere Application Server (WAS) versus Tomcat, JBoss and WebLogic. It discusses each product's capabilities in areas like runtimes, API management, development tools, cloud support, and more. Gartner research is referenced showing IBM holds the #1 position in the middleware software market for the past 13 years. The document aims to help organizations choose the best application server for their needs.
PHD Virtual Image-based Backup for Citrix XenServer - Mark McHenry
This presentation shares information about PHD Virtual's image-based backup for Citrix XenServer environments. This solution is a simple and cost-effective alternative for those who are still wrestling with agents and writing scripts to perform backups.
This document provides an overview of implementing affordable disaster recovery with Hyper-V and multi-site clustering. It discusses what constitutes a disaster, the key components needed which are a storage mechanism, replication mechanism, and target servers/cluster. It also covers clustering history, what a cluster is, and the important concept of quorum which determines a cluster's existence through voting of its members.
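Quorum itself reduces to majority voting over the cluster's members. Here is a minimal sketch of the idea under a hypothetical two-site, file-share-witness configuration; the vote counts are illustrative, not a Hyper-V implementation.

```python
# Minimal sketch of the quorum concept described above: a cluster
# "exists" only while a strict majority of its votes remains online.
# The two-site topology and file-share witness below are hypothetical.
def has_quorum(votes_online: int, votes_total: int) -> bool:
    """Node-majority quorum: survive while more than half the votes are up."""
    return votes_online > votes_total // 2

# Two nodes per site plus a file-share witness: 5 votes in total.
total = 5
print(has_quorum(3, total))  # True  - one site plus the witness survives
print(has_quorum(2, total))  # False - a site loss without the witness
```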
This document discusses a new partnership between HPE and Zerto to offer disaster recovery capabilities for HPE Helion CloudSystem Enterprise customers using Zerto Virtual Replication (ZVR). ZVR allows customers to recover VMs and application data with recovery point objectives (RPO) as low as seconds and recovery time objectives (RTO) in minutes. The solution provides application-consistent recovery across heterogeneous environments with different hypervisors and storage solutions. The partnership will provide customers an alternative to VMware SRM and enable DR between on-prem and cloud environments with automation and orchestration.
VMware End-User-Computing Best Practices Poster - VMware Academy
This document provides best practices for configuring and managing various VMware Horizon and related products in a virtual desktop infrastructure (VDI) environment. It includes recommendations for installing and updating agents in the proper order, sizing infrastructure components appropriately based on the number of users and sessions, optimizing master images, balancing performance and cost considerations, and leveraging tools like App Volumes and User Environment Manager to improve management and end user experience. The document emphasizes the importance of testing, monitoring, and following established norms and limits to ensure a reliable and scalable VDI deployment.
Virtual SAN: It’s a SAN, it’s Virtual, but what is it really? - DataCore Software
What do you think of when you hear the words “Virtual SAN”? For some, it may mean addressing application latency and infrastructure costs through consolidation. For others, it may be addressing potential single point of failures. Regardless of the use case, Virtual SANs are becoming one of the hottest software-defined storage solutions for IT organizations to maximize storage resources, lower overall TCO, and increase availability of critical applications and data.
This presentation introduces the concept of Virtual SAN and does a technical deep dive on the most common use cases and deployment models involved with a DataCore Virtual SAN solution.
Hypervisor-based VDI utilizes virtual machines running on hypervisors to provide desktop environments to users, while blade PCs allocate physical servers with each user having their own dedicated resources. The main differences are in performance, scalability, and cost - VDI has lower performance but higher density and flexibility, while blade PCs provide better performance through dedicated resources but have lower density and scalability. Administrative overhead and overall costs vary depending on the environment and needs of the organization.
PCI Pass-through - FreeBSD VM on Hyper-V (MeetBSD California 2016) - iXsystems
The slides for Kylie Liang's presentation, “PCI Pass-through - FreeBSD VM on Hyper-V”, given at MeetBSD California 2016 in Berkeley, CA.
A recording of the talk can be viewed at: http://bit.ly/2hteton.
The document discusses eG Innovations' performance management monitoring solution. It provides an overview of eG and how it can monitor virtual desktop infrastructure (VDI) deployments. eG offers deep visibility into all layers of VDI, including the virtualization platform, connection broker, profile server, and individual user sessions. It monitors over 150 applications and infrastructure components to provide comprehensive performance monitoring of complex VDI environments.
09NTC Server Virtualization Session Slides - Peter Campbell
Slides from the 2009 Nonprofit Technology Conference session on server virtualization. These slides introduce the main concepts and suggest approaches for small, medium, and large business scenarios. The latter set of slides is by Matt Eshelman of CITIDC, who presented with me.
This document discusses storage considerations for VMware View environments. It begins with an introduction to storage systems and their history, then discusses planning storage needs for VMware View. Some key challenges with storage in virtual desktop environments are large amounts of centralized user data and "storms" of access that can impact performance. The document recommends addressing these through good sizing and performance assessment, optimizing desktop images, leveraging technologies appropriately, and using resources on optimizing for View.
The document discusses server virtualization with Microsoft Hyper-V and HP solutions. It provides an overview of virtualization benefits like consolidation and efficiency. It also covers virtual machine lifecycle management best practices, considerations for application virtualization, and how HP Insight Control complements Microsoft System Center solutions for virtualization management.
JVM Support for Multitenant Applications - Steve Poole (IBM) - jaxLondonConference
Presented at JAX London 2013
Per-tenant resource management can help ensure that collocated tenants peacefully share computational resources based on individual quotas. This session begins with a comparison of deployment models (shared: hardware, OS, middleware, everything) to motivate the multitenant approach. The main topic is an exploration of experimental data isolation and resource management primitives in IBM’s JDK that combine to help make multitenant applications smaller and more predictable.
This document provides an overview of sample scripts for Windows Server Update Services (WSUS). It describes scripts that can remotely install the WSUS client, enumerate installed and missing patches on multiple computers, perform on-demand patching of multiple machines, and match security updates to their corresponding Microsoft advisory numbers. The scripts are offered without warranty and are intended to demonstrate what can be automated through scripting WSUS functionality.
This slide deck presentation on best practices for architecting and implementing Windows Server Update Services (WSUS) was used at a technology conference. The document provides an overview and outline of the presentation topics, which include WSUS architecture designs, implementation, troubleshooting tips, and a demonstration. Contact information is provided for the presenting company for additional information.
The document discusses the history and current state of virtualization technology. It covers major developments from the 1960s to present day, including the introduction of virtualization concepts, early vendors like VMware, the growth of open source solutions, and the emergence of cloud computing. The document also examines current adoption rates and trends, noting that virtualization is becoming standard across enterprise data centers but challenges remain for desktop virtualization and cloud adoption.
This document discusses how VDI-in-a-Box can be used to deliver applications and desktops in small business scenarios. It provides steps to set up a VDI-in-a-Box server with Remote Desktop Services, Hyper-V, and RemoteApp capabilities. Problem applications can be hosted on pooled desktops using RemoteApp for Hyper-V as a lighter-weight alternative to full virtual desktops. The document aims to help IT professionals right-size application delivery based on user needs.
This document is a slide deck presentation about converting scripts from VBScript to PowerShell. It discusses how PowerShell uses objects and pipelines instead of text and loops. It provides examples of writing modular, reusable functions and using PowerShell commands and techniques instead of those from VBScript. The presentation encourages attendees to download the materials and scripts from the company's website and consider attending future classes.
This document summarizes various command line tricks and tools for managing ESXi hosts, including Linux commands like find, grep, cat, and vi, as well as VMware-specific commands like esxtop, vmkfstools, vim-cmd, esxcli, esxupdate, and vm-support. It is divided into four parts that cover understanding the ESXi command line, Linux commands, VMware commands, and using the vMA and scripting. The document provides examples for using these commands to locate files, read logs, control services, get process and disk information, configure networking and storage, manage VMs, troubleshoot issues, and install updates.
This document contains slides from a presentation about supporting SQL Server. The presentation provides an overview of how SQL Server works, including how data is stored physically and accessed. It discusses backup strategies, indexing, query optimization, high availability options and basic SQL queries. The presenter provides their contact information and offers to share additional resources.
This document summarizes a presentation about building, deploying, and supporting Server Core in Windows Server 2008 R2. The presentation covers the benefits of Server Core, including a smaller footprint, fewer patches required, and greater stability. It also discusses some of the limitations of Server Core, such as limited GUI functionality and .NET framework support. The presentation provides guidance on installing and configuring Server Core, and recommends using remote management tools like PowerShell instead of direct console access for ongoing management.
This document discusses different ways to deploy RemoteApps using Remote Desktop Services (RDS), including RDP file distribution, RD Web Access, local desktop installation, and client extension re-association. It compares the pros and cons of each approach and how they enable users to access applications remotely in different ways.
This document discusses how to automatically and rapidly deploy software in a small environment. It covers the two main parts of the process: software packaging and software deployment. For packaging, it explains how to configure software installations to run silently without user input using techniques like installation switches, MSI properties, and diff tools. For deployment, it discusses options like GPSI, PSExec, and paid solutions to remotely install packaged software on machines.
This document is a slide deck presentation about Windows PowerShell scripting and modularization. The presentation covers topics such as starting with commands, moving to scripts, parameterizing scripts, encapsulating in functions, using dot-sourcing, building pipeline functions, adding help, building script modules, and making script cmdlets. The presentation provides examples and guidance for improving PowerShell scripts through modularization and best practices.
The document is a slide deck about PowerShell error handling and debugging. It discusses two types of bugs, techniques for debugging like using trace code, breakpoints, and the step debugger. It also covers error handling using try/catch blocks and setting error actions. The slide deck was presented at a conference by Concentrated Technology.
This PowerShell crash course for SharePoint administrators introduces PowerShell and demonstrates how to use it to manage SharePoint and other Microsoft products and services. The presentation covers PowerShell basics like running commands, piping, formatting output, remoting, and using WMI. It aims to help administrators learn PowerShell and show how it can simplify and automate administrative tasks. Attendees are encouraged to download the slides and materials from the presenter's website for reference.
The document discusses preparing software for automated deployment after upgrading to Windows 7. It covers two key aspects: repackaging software to install silently without user input, and deploying the repackaged software using a deployment tool. For repackaging, it describes analyzing the installation format (EXE, MSI, etc.), identifying any silent installation switches, and using tools like WinINSTALL LE to capture changes if switches cannot be found. It also discusses customizing software post-installation using registry changes packaged via these same tools.
This slide deck discusses remote computer management using PowerShell v2. It covers prerequisites, an overview of PowerShell remoting and underlying technologies like WinRM. Specific configuration steps are provided for domains, workgroups and individual machines. Troubleshooting tips and techniques for using remoting sessions and Invoke-Command are also summarized. The instructor encourages attendees to contact them for additional materials or questions.
This document is a slide deck presentation on Windows PowerShell given by Don Jones of Concentrated Technology. The presentation introduces PowerShell, covering topics like why it was created, running commands, piping, remoting, and more. It encourages attendees to download the transcript and scripts from the company's website for further reference.
This document contains a slide deck presentation about eight tips and tricks for using PowerShell. The presentation covers remote control using WinRM and PSRemoting, parameter binding, splatting, tracing commands, suppressing errors, making reusable tools, comment-based help, and creating GUI apps. The presentation encourages attendees to download the slides and scripts from the Concentrated Technology website.
This slide deck discusses customizing PowerShell output using calculated properties. It explains that calculated properties allow dynamically extending objects with custom columns. The hashtable syntax for defining a calculated property is shown, with the expression using $_ to access the object being piped. Examples are provided like calculating free disk space, performing secondary WMI queries, and formatting output for AD user creation. More advanced formatting options like alignment, width, and format strings are also covered.
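A minimal illustration of that hashtable syntax, using the free-disk-space example the deck mentions:

```powershell
# @{Name=...; Expression={...}} defines the custom column; $_ is the piped object.
Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" |
    Select-Object DeviceID,
                  @{Name='FreeGB'; Expression={ [math]::Round($_.FreeSpace / 1GB, 1) }}
```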
This 75-minute PowerShell crash course presentation teaches key PowerShell usage patterns using real-world tasks as examples. It covers loading extensions and modules, cmdlet and parameter names, piping, formatting output as tables, manipulating objects, comparison operators, filtering with Where-Object, using WMI, batch cmdlets like Invoke-WmiMethod, and PowerShell scripting. The slide deck is available on the company's website and they offer additional training resources.
AI_dev Europe 2024 - From OpenAI to Open Source AI, by Raphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
MYIR Product Brochure - A Global Provider of Embedded SOMs & Solutions, by Linda Zhang
This brochure introduces MYIR Electronics and its products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and comprehensive solutions based on architectures such as ARM, FPGA, RISC-V, and AI. The company caters to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services. MYIR is recognized as a national high-tech enterprise and is listed among the "Specialized and Special New" enterprises in Shenzhen, China. Its core belief is that "our success stems from our customers' success," expressed in its slogan, "Make Your Idea Real."
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/transforming-enterprise-intelligence-the-power-of-computer-vision-and-gen-ai-at-the-edge-with-openvino-a-presentation-from-intel/
Leila Sabeti, Americas AI Technical Sales Lead at Intel, presents the “Transforming Enterprise Intelligence: The Power of Computer Vision and Gen AI at the Edge with OpenVINO” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Sabeti focuses on the transformative impact of AI at the edge, highlighting the role of the OpenVINO tool kit in streamlining the AI solution life cycle on Intel hardware. This includes the development of energy-efficient computer vision and generative AI models suitable for edge computing.
Sabeti showcases cutting-edge AI applications, such as multimodal LLMs for document understanding and YOLO object detection for smart retail solutions. She addresses the entire edge compute ecosystem, discussing how to optimize AI processes from training to inference across various computing platforms, including Intel GPUs. Additionally, she explores how businesses can seamlessly transition between edge and cloud environments and how Intel’s portfolio of solutions unlocks the advantages of edge computing, such as data protection and AI acceleration.
Blockchain and Cyber Defense Strategies in New Genre Times, by anupriti
Explore robust defense strategies at the intersection of blockchain technology and cybersecurity. This presentation delves into proactive measures and innovative approaches to safeguarding blockchain networks against evolving cyber threats. Discover how secure blockchain implementations can enhance resilience, protect data integrity, and ensure trust in digital transactions. Gain insights into cutting-edge security protocols and best practices essential for mitigating risks in the blockchain ecosystem.
Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threats, by anupriti
In the rapidly evolving landscape of blockchain technology, the advent of quantum computing poses unprecedented challenges to traditional cryptographic methods. As quantum computing capabilities advance, the vulnerabilities of current cryptographic standards become increasingly apparent.
This presentation, "Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threats," explores the intersection of blockchain technology and quantum computing. It delves into the urgent need for resilient cryptographic solutions that can withstand the computational power of quantum adversaries.
Key topics covered include:
An overview of quantum computing and its implications for blockchain security.
Current cryptographic standards and their vulnerabilities in the face of quantum threats.
Emerging post-quantum cryptographic algorithms and their applicability to blockchain systems.
Case studies and real-world implications of quantum-resistant blockchain implementations.
Strategies for integrating post-quantum cryptography into existing blockchain frameworks.
Join us as we navigate the complexities of securing blockchain networks in a quantum-enabled future. Gain insights into the latest advancements and best practices for safeguarding data integrity and privacy in the era of quantum threats.
Sustainability requires ingenuity and stewardship. Did you know that Pigging Solutions pigging systems help you achieve your sustainable manufacturing goals AND provide rapid return on investment?
How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush-waste, means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
What's Next: Web Development Trends to Watch, by SeasiaInfotech2
Explore the latest advancements and upcoming innovations in web development with our guide to the trends shaping the future of digital experiences. Read our article today for more information.
This slide deck is a deep dive into Salesforce's latest release, Summer '24, by the famous Stephen Stanley. He examined the release notes very carefully and summarised them for the Wellington Salesforce user group's virtual meeting on June 27, 2024.
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppSec, by James Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams and employing DevSecOps principles to integrate security throughout the development lifecycle.
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo..., by Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
Resume of Sadika Shaikh, BCA student, by SadikaShaikh7
I am a dedicated BCA student with a strong foundation in web technologies, including PHP and MySQL. I have hands-on experience in Java and Python, and a solid understanding of data structures. My technical skills are complemented by my ability to learn quickly and adapt to new challenges in the ever-evolving field of computer science.
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
Quality Patents: Patents That Stand the Test of Time, by Aurora Consulting
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
Knowledge and Prompt Engineering, Part 2: Focus on Prompt Design Approaches
Designing virtual infrastructure
1. Designing Your Virtual Infrastructure & Hypervisor Deep Dive. Don Jones, ConcentratedTech.com. Prerequisites for this presentation: a strong understanding of basic virtualization concepts. Level: Intermediate.
3. About the Instructor. Don Jones: Contributing Editor, technetmagazine.com; IT author, consultant, and speaker; co-founder of Concentrated Technology; seven-time recipient of Microsoft's Most Valuable Professional (MVP) Award; author and Editor-in-Chief for Realtime Publishers; trainer for www.CBTNuggets.com.
4. 44% of Virtualization Deployments Fail, according to a 2007 CA announcement. Common causes: inability to quantify ROI, insufficient administrator training, and expectations not aligned with results. Success = measured performance, a diligent inventory, load distribution, and thorough investigation of the technology.
5. 55% Experience More Problems than Benefits with Virtualization, according to an Interop survey from May 2009. Common causes: lack of visibility, lack of tools to troubleshoot performance problems, and insufficient education on virtual infrastructure software. Statistics: 27% could not visualize or manage performance, 25% cite training shortfalls, 21% were unable to secure the infrastructure, and 50% say implementation costs are too high.
6. Lifecycle of a Virtualization Implementation. Step -1: hype recognition & education. Step 0: assessment. Step 1: purchase & implementation. Step 2: P2V. Step 3: high availability. Step 4: backups. Expansion: Step 5: virtualization at the desktop. Step 6: DR implementation.
8. The Virtualization Assessment. Successful rollouts need a virtualization assessment; you must analyze your environment before you act. The assessment should include: an inventory of servers; an inventory of attached peripherals; performance characteristics of servers; analysis of those performance characteristics; analysis of the hardware needed to support virtualized servers; a backups analysis; a disaster recovery analysis (hot vs. warm vs. cold); and initial virtual resource assignment suggestions.
9. Easy Candidates for Virtualization: low processor utilization; low memory requirements (we too often add too much RAM to a server); low context switches; infrastructure servers; redundant or warm-spare servers; occasional- or limited-use servers; and systems where many partially-trusted people need console access.
10. Not Candidates for Virtualization: high and constant processor/memory utilization; high context switches; attached peripherals (serial, parallel, USB, external SCSI, license keyfobs, scanners, bar code readers); very high network use or gigabit networking requirements; specialized hardware requirements (hardware appliances, pre-built or unique configs); and Terminal Servers... at least with today's technology.
11. Performance is Job One. In the early days of virtualization, we used to say: "Exchange Servers can't be virtualized," "Terminal Servers can't be virtualized," "You'll never virtualize a SQL box." Today's common knowledge is that the decision relates entirely to performance. Thus, before you can determine which servers to virtualize, you must understand their performance: measure it over time, compile the results into reports, and look for deviations from nominal activity.
12. Useful Performance Counters (category: metric, example threshold):
Disk: % Disk Time, > 50%
Memory: Available MBytes, below baseline
Memory: Pages/sec, > 20
Page File: % Usage, > 70%
Physical Disk: Current Disk Queue Length, > 18
Processor: % Processor Time, > 40%
System: Processor Queue Length, > 5.4
System: Context Switches/sec, > 5000
System: Threads, > 2000
17. Assessing the Right vRAM. We put too much RAM into our physical servers! Initial RAM is cheap, but adding RAM later can be costly, so we're accustomed to an effectively unlimited RAM supply, even though the OS and applications are rarely RAM-bound. Who has 4G of RAM in your DCs, and who NEEDS it? Be honest!
18. Assessing the Right vRAM. Not so with virtual machines! RAM conservation is critical to the consolidation ratio: excess RAM in one VM means no RAM for another. This is particularly an issue with Hyper-V, which has no page table sharing; assigned VM RAM = reserved physical RAM. So, how do you measure the right level of RAM? Basically, you subtract.
19. Assessing the Right vRAM. Let's consider a physical machine with 2G of on-board RAM: 2G of on-board RAM minus 0.5G of available RAM equals an initial assignment of 1.5G of vRAM.
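A rough PowerShell sketch of that subtraction follows; SERVER01 is a placeholder name, and a single point-in-time sample is optimistic, so in practice you would average Available MBytes over days:

```powershell
# Estimate initial vRAM as on-board RAM minus available RAM (the slide's "subtract").
$os = Get-WmiObject Win32_OperatingSystem -ComputerName SERVER01
$totalGB = [math]::Round($os.TotalVisibleMemorySize / 1MB, 1)   # WMI reports KB
$freeGB  = [math]::Round($os.FreePhysicalMemory / 1MB, 1)       # WMI reports KB
"{0}G on-board - {1}G available = {2}G initial vRAM" -f $totalGB, $freeGB, ($totalGB - $freeGB)
```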
20. Gathering Performance. PerfMon is the only mechanism that can gather these statistics from servers, but it is ridiculously challenging to use. Other products assist: Microsoft Assessment & Planning Solution Accelerator, VMware Consolidation & Capacity Planner, PlateSpin PowerRecon, and CiRBA.
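The deck predates it, but PowerShell's Get-Counter (v2 and later) scripts the same counter infrastructure PerfMon exposes. A minimal sketch, with hypothetical server and path names:

```powershell
# Sample a few of the counters from the earlier table every 15 seconds for
# 20 samples, then save a .blg log for baseline analysis.
$counters = '\Processor(_Total)\% Processor Time',
            '\Memory\Pages/sec',
            '\System\Context Switches/sec',
            '\PhysicalDisk(_Total)\Current Disk Queue Length'
Get-Counter -ComputerName SERVER01 -Counter $counters -SampleInterval 15 -MaxSamples 20 |
    Export-Counter -Path 'C:\Perf\SERVER01.blg' -FileFormat BLG
```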
22. Consolidation = Cost Savings. A small server at $6,000 with no consolidation (1:1) costs $6,000 per server, with large marginal cost increases per additional server. A large server at $15,000 plus $5,000 of virtualization software costs $20,000 but hosts many VMs: at 8:1 that is $2,500 per server, at 15:1 it is $1,333, and at 20:1 it is $1,000, with smaller marginal cost increases, plus power, cooling, and provisioning labor.
23. Three Types of Virtualization. Entire-system virtualization (VMware, Microsoft Virtual Server): the virtual OS is an entire system with no awareness of the underlying host. OS virtualization (Parallels Virtuozzo): OS instances are "deltas" of the host configuration. Paravirtualization (Microsoft Hyper-V, Xen / Citrix XenSource): similar to hardware virtualization, but the virtual OS is "aware" it is virtualized.
24. Hardware Virtualization: ESX / vSphere. A hybrid hypervisor and host OS; device drivers live in the hypervisor; emulation (translation from emulated driver to real driver). High cost, high availability, high performance.
25. Paravirtualization: Hyper-V, Citrix XenSource. The host OS becomes the primary partition above the hypervisor; device drivers live in the primary partition; paravirtualization (no emulation for "enlightened" VMs). Low cost, moderate-to-high availability, high performance.
26. Hardware Virtualization: Microsoft Virtual Server. The hypervisor sits above the host OS and is installed to it; device drivers live in the hypervisor; emulation (translation from emulated driver to real driver). Low cost, low availability, low performance.
27. OS Virtualization: Parallels Virtuozzo. Each VM is comprised of the host config plus deltas; there is no traditional hypervisor, and a V-layer processes requests; all real device drivers are hosted on the host OS. Moderate cost, moderate availability, very high performance.
28. CAUTION! Differences between the major hypervisors (vSphere, Hyper-V, Xen) are vastly overrated. Everything one vendor calls an "advantage" is what the competitors trash as "bad design." Either (a) get all the facts or (b) buy mainly on price. This is no place for a religious jihad: focus on business needs, not technical minutiae.
29. Example: VMware's constant harping on "smaller footprint," which is flawed and frankly ridiculous. Is anyone hurting for OS disk space out there? There are also numerous myths and overstatements about specific hypervisor implementations. Most of these products are basically the same in terms of business-level performance and features; the main difference is cost.
31. P2V Isn't Sexy Any More. After environment stand-up, the P2V process converts physical machines to virtual ones: essentially a "ghost" plus a "driver injection." Numerous applications can do this in one step (SCVMM, Converter, third parties). These days, P2V is a commodity; everyone has their own version. Some are faster, some much slower; paid options are generally faster.
32. P2V, P2V-DR. P2V: physical-to-virtual machine conversion, a tool as well as a process (SCVMM, VMware VI/Converter, Acronis, Leostream, others). P2V-DR: similar to P2V, but with an interim step of image creation/storage; a "poor man's DR."
33. P2V-DR Uses. P2V-DR can be leveraged for medium-term storage of server images; it is useful when the DR site lacks hot backup capability or requirements. Regularly create images of physical servers, but only store those images rather than loading them into the virtual environment. The result is a cheaper-to-maintain DR environment: not fast, not easy, not completely reliable... but essentially cost-free.
35. Costs vs. Benefits. High availability adds dramatically greater uptime for virtual machines: protection against host failures, resource overuse, and scheduled/unscheduled downtime. It also adds much greater cost: shared storage between hosts, connectivity, and higher (more expensive) software editions. Not every environment needs HA!
36. What Really is Live Migration? Part 1: Protection from Host Failures
37. What Really is Live Migration? Part 2: Load Balancing of VM/host Resources
38. Comparing Quick Migration with Live Migration. Simply put: migration speed is the difference. In Hyper-V's original release, a virtual machine could be relocated with "a minimum" of downtime. That downtime was directly related to the amount of memory assigned to the virtual machine and the connection speed between the virtual hosts and shared storage: VMs with more assigned virtual memory, or on slower networks, took longer to complete a migration from one host to another, while those with less completed it in less time. With Quick Migration, a VM with 2G of vRAM could take 32 seconds or longer to migrate; downtime ensues.
39. Comparing Quick Migration with Live Migration: the down-and-dirty details. During a Quick Migration, the virtual machine is immediately put into a "Saved" state. This is not a power-down, nor is it the same as the Paused state: in the Saved state, unlike pausing, the virtual machine releases its memory reservation on the host and stores the contents of its memory pages to disk. Once this has completed, the target host can take ownership of the virtual machine and bring it back into operation.
40. Comparing Quick Migration with Live Migration: the down-and-dirty details. This saving of virtual machine state consumes most of the time involved in a Quick Migration. What was needed to reduce this delay was a mechanism to pre-copy the virtual machine's memory from the source to the target host, logging changes to memory pages that occur during the copy. Those changes tend to be relatively small in quantity, making the delta copy significantly smaller and faster than the original copy. Once the initial copy has completed, Live Migration then pauses the virtual machine, copies the memory deltas, and transfers ownership to the target host. Much faster, with effectively "zero" downtime.
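For reference, later Hyper-V releases (Windows Server 2012 and newer, after this deck's Hyper-V R2 era) expose live migration as a single cmdlet; a minimal sketch with placeholder names:

```powershell
# Live-migrate a running VM: memory is pre-copied and deltas transferred,
# so the guest keeps running throughout. WEB01 and HYPERV02 are placeholders.
Move-VM -Name 'WEB01' -DestinationHost 'HYPERV02'
```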
41. Common Features in High-End Platforms: live migration, which moves running virtual machines to an alternate host before a host failure; automated relocation to new hardware and restart of virtual machines immediately upon a host failure; load-balancing calculations that manually or automatically re-balance running virtual machines across hosts to prevent resource contention; disk storage migration, which enables zero-impact relocation of virtual machine disk files to alternate storage; and automated replication features that copy backed-up virtual machines to alternate locations for disaster recovery purposes.
43. Backup Terminology. File-level backup: backup agent in the virtual machine. Block-level backup: backup agent on the virtual host. Quiescing: quieting the file system to prepare for a backup. O/S crash consistency: capability for post-restore O/S functionality. Application crash consistency: capability for post-restore application functionality.
44. Four Types of Backups. (1) Backing up the host system: may be necessary to maintain the host configuration, but often isn't; the fastest fix for a broken host is often a complete rebuild. (2) Backing up virtual disk files: fast, and can be done from a single host-based backup client, but file-level restores are challenging. (3) Backing up VMs from inside the VM: slower, requires backup clients in every VM, and is resource-intensive on the host, but is capable of file-level restores. (4) Backing up VMs from the storage perspective: leverage storage frame utilities to complete the backup.
46. The Problem with Transactional Databases. O/S crash consistency is easy to obtain: just quiesce the file system before beginning the backup. Application crash consistency is much harder: transactional databases like AD, Exchange, and SQL don't quiesce when the file system does. You need to stop these databases before quiescing, or you need an agent in the VM that handles DB quiescing. Restoration without crash consistency will lose data; the DB restores into an "inconsistent" state.
47. The Problem with Transactional Databases. For VMs, you must consider file-level backups and block-level backups: "top-down" vs. "bottom-up." File-level backups provide individual file restorability and transactional database crash consistency. Block-level backups provide whole-server restorability, but not all of them provide application crash consistency. Windows VSS can quiesce applications prior to snapping a backup. Advantage: Hyper-V!
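As a concrete example of VSS-based quiescing, later Hyper-V versions (Windows Server 2016 and newer, well beyond this deck's vintage) let you request "production" checkpoints that trigger VSS inside the guest; the VM and snapshot names below are placeholders:

```powershell
# Production checkpoints invoke VSS in the guest, so VSS-aware apps (SQL,
# Exchange, AD) quiesce their databases before the snapshot is taken.
Set-VM -Name 'SQL01' -CheckpointType Production
Checkpoint-VM -Name 'SQL01' -SnapshotName 'PreBackup'
```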
49. Desktop Virtualization = VDI = Hosted Desktops. Once you fully embrace virtualization for your servers, desktops are a common next focus, and VDI is all about the apps. HOWEVER, BEWARE VDI! VDI is a much more complex beast than Terminal Services, Citrix XenApp, or other presentation virtualization platforms, and it is dramatically more expensive. VDI's use cases (and there are only two): applications that simply don't work atop TS/Citrix, and high-utilization apps that require remote access.
51. Disaster Recovery. Don't forget that your DR infrastructure will have to change drastically. It's a big, complex topic, suitable for a whole session all by itself!
52. Thank You! Please feel free to pick up a card if you'd like copies of my session materials; I'll be happy to take any last questions while I pack up. Please complete and submit an evaluation form for this and every session you attend!