The document provides information about virtual machine extensions (VMX) on Juniper Networks routers. It discusses hardware virtualization concepts including guest virtual machines running on a host machine. It then describes the different types of virtualization including fully virtualized, para-virtualized, and hardware-assisted. The rest of the document goes into details about the VMX product, architecture, forwarding model, and performance considerations for different use cases.
PLNOG16: Service Creation by Operators – SP IWAN (Kreowanie usług przez operatorów), Krzysztof Konkowski (PROIDEA)
The document discusses an SP-IWAN (Service Provider Intelligent WAN) architecture that can be offered by network operators. It proposes separating the transport and service layers, using DMVPN as an overlay and allowing applications to flow freely between MPLS and internet links using PfR. It also discusses using virtual network functions and orchestration to automate service provisioning and deliver application-aware services like monitoring, optimization and security. The architecture is meant to help operators deliver new cloud services, optimize application performance across networks and generate new revenue streams.
Juniper Networks provides a data center solution consisting of Juniper switches, security devices, and Contrail SDN software. The solution addresses challenges of scale and automation needed to build future-proof clouds and data centers. Key aspects of the solution include Juniper's portfolio of data center switches like the QFX10000 line, partnerships with other vendors, and proven reference designs. Juniper helps customers address these challenges and create valuable cloud services.
This document provides an overview and summary of Cisco's Data Center networking and storage solutions, with a focus on the new Cisco MDS 9710 Director. Some key points:
- Cisco offers a multi-protocol portfolio including Fibre Channel, FCoE, and IP networking solutions to address growing data and connectivity demands in modern data centers.
- The Cisco MDS 9710 is the newest storage director that provides the highest scalability, availability, and investment protection in the industry for large scale data centers.
- It supports up to 384 line-rate 16Gbps Fibre Channel ports or 48-port 10GbE FCoE modules in a single chassis, providing three times the performance of competing directors.
OpenStack and OpenContrail for FreeBSD platform by Michał Dubiel (eurobsdcon)
This document provides an overview of running OpenStack and OpenContrail on the FreeBSD platform. It first discusses OpenStack components like Nova compute and network services. It then covers using OpenContrail for network virtualization, which provides overlay networking as an alternative to VLANs. This allows migration of virtual machines between physical servers while maintaining network isolation. The status of FreeBSD support for OpenStack compute and networking services is also summarized.
The document provides information on Juniper SRX platform updates, including:
1) vSRX updates - The virtual firewall platform now supports up to 80G firewall throughput on a single server, and a 100G vSRX was announced. It supports VMware 5.5 with SR-IOV and offers feature parity with physical SRX firewalls.
2) Physical SRX updates - New SRX3xx and SRX550 series for branches up to 500 users. The SRX1500 provides high performance networking and security for enterprise edge and data center edge. The SRX5400 supports advanced software security services.
3) Software updates - Sky ATP cloud-based malware analysis and SRX User Identity REST API.
The document provides troubleshooting tips and techniques for Cisco Data center switches including the Cisco Nexus 7000, Catalyst 6500 VSS, and high CPU utilization issues. It discusses using commands like show processes cpu sorted, debug netdr capture, and show ip cef to troubleshoot traffic flow and switching paths. It also covers troubleshooting software upgrades on the Nexus 7000 and gathering core dumps and logs to debug process crashes.
This document discusses use cases and requirements for different cloud customer segments using Contrail. It describes Contrail's ability to enable IT as a service, enterprise migration to the cloud with legacy interconnects, public cloud services, and IoT/M2M use cases. It provides an overview of how Contrail works including its components, scale out architecture, and interaction with OpenStack. It also summarizes Contrail's features such as routing, security, analytics, and gateway services.
Cloud Network Virtualization with Juniper Contrail (buildacloud)
Description: Contrail technology will be discussed, covering architecture, capabilities, and use cases, followed by a demonstration of the current Contrail implementation on CloudStack/OpenStack.
Parantap works as a Sr. Director of Solutions Engineering for Contrail Product within Juniper. Before Juniper, Parantap led the network architecture team for Microsoft Online Services (Windows Azure, MS Bing). Prior to Microsoft, Parantap worked as a core engineering manager for UUNet Technologies building Internet backbones.
SR-IOV ixgbe Driver Limitations and Improvement (LF Events)
SR-IOV is a device virtualization technology mainly used to improve the network performance of virtual machines. However, SR-IOV has some limitations that come from the hardware and/or driver implementation. For certain use cases, such as Network Function Virtualization (NFV), those limitations are critical to providing services. Intel's 10Gb NIC, Niantic (82599), has such limitations (e.g. VLAN filtering, multicast promiscuous mode) for NFV use cases.
This presentation will show the limitations and issues and how they are being addressed, then explain how VF multicast promiscuous mode support is implemented in the ixgbe driver, along with VF trust and the corresponding iproute2 functionality enhancements.
This presentation was delivered at LinuxCon Japan 2016 by Hiroshi Shimamoto
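The VF trust knob discussed in this talk is exposed through iproute2's per-VF `ip link` subcommands; a minimal sketch of host-side configuration follows (the interface name and VF index are illustrative, and a driver with VF trust support, such as a recent ixgbe, is assumed):

```shell
# Mark VF 0 on enp3s0f0 as trusted, so the guest attached to it may
# enable promiscuous / multicast-promiscuous modes.
ip link set dev enp3s0f0 vf 0 trust on

# Optionally pin the VF's MAC address and VLAN from the host side.
ip link set dev enp3s0f0 vf 0 mac 52:54:00:12:34:56
ip link set dev enp3s0f0 vf 0 vlan 100

# Verify the per-VF settings.
ip link show dev enp3s0f0
```

Without `trust on`, the PF driver typically rejects the VF's request to enter multicast promiscuous mode, which is exactly the NFV limitation the presentation addresses.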
Slawomir Janukowicz, Juniper Networks
Juniper Day, Praha, 13.5.2015
If SlideShare does not display the presentation correctly, you can download it in .ppsx or .pdf format (by clicking the button in the bottom toolbar of the slides).
Devconf2017 - Can VM networking benefit from DPDK (Maxime Coquelin)
DPDK brings high-performance/low-latency virtualization networking capabilities thanks to its Vhost/Virtio support. The session will first introduce DPDK and its Vhost/Virtio implementations, exposing to the audience examples of possible uses, and challenges that need to be addressed to achieve high-performance, functionality and reliability. Then, Vhost/Virtio improvements introduced in last DPDK release will be covered, such as receive path optimizations, Virtio's indirect descriptors support, or transmit zero copy to name a few. The speakers will explain which problems they aim to address, how they address them, mentioning their limitations.
Finally, the speakers, who are active DPDK Virtio/Vhost contributors, will describe what new developments are in the pipeline to tackle the remaining challenges.
The session will be presented so that DPDK developers and users find useful information on current developments and status. People not familiar with DPDK will get an overview and can share ideas with other projects.
VMware expert Motonori Shindo presented on L2-over-L3 encapsulation protocols like VXLAN, NVGRE, STT, and Geneve. He explained how each protocol works, including header formats, and provided ecosystem updates. He believes Geneve has potential because it allows for extensibility through option fields while leveraging NIC offloading, but notes that VXLAN is already widely adopted; critics argue that Geneve's goals could be achieved through other means.
Virtualization Forum 2015, Praha, 7.10.2015
Juniper Networks hall
If SlideShare does not display the presentation correctly, you can download it in .ppsx or .pdf format.
Nicolai van der Smagt has been in the business of designing, implementing and running SP networks for over 15 years. He has worked with DOCSIS, DSL and FTTH operators. Nowadays, Nicolai is helping Infradata’s pan-European customers build better access, aggregation and core networks, but his focus is on the data center, SDN, NFV and the whitebox switching revolution. His motto: “Simplicity is sophistication”.
Topic of Presentation: SDN
Language: English
Abstract:
Open source SDN that actually works - today
OpenContrail is an open source (Apache 2.0 licensed) project that provides network virtualization in the data center, using tried and tested open standards. It provides northbound APIs, integrates with OpenStack or CloudStack, and is available today!
In this slot we’ll show you the architecture and ideas behind the technology and how OpenContrail enables you to avoid the pitfalls that other (closed) SDN solutions bring. If time permits we’ll also demo the technology.
Juniper Networks: Virtual Chassis High Availability (Juniper Networks)
This presentation shares the findings of the second installment of a recent Juniper Networks commissioned Network Test to evaluate its Virtual Chassis technology in Juniper EX8200 modular and Juniper EX4200/EX4500/EX4550 fixed-configuration switches.
In this second installment of a two-part project, the focus is on the reliability and resiliency of Virtual Chassis technology. Part I of this project focused on Virtual Chassis performance and scalability: http://juni.pr/13Zi1Sp. Visit http://juni.pr/dacenSS
to learn more about Juniper’s Data Center solutions.
Open Ethernet: an open-source approach to modern network design (Alexander Petrovskiy)
The era of closed proprietary hardware platforms is coming to an end. Today, in the world of Web-scale IT, the industry is starting to adopt a new approach based on the principles of openness, scalability, and customizability. In the more conservative networking industry, however, traditional equipment and proprietary technologies from a single vendor are often used, which limits flexibility, prevents innovation, and narrows choice.
The "Open Ethernet" initiative from Mellanox brings open source principles into the world of modern networking and allows customers to select the best hardware and software to design network infrastructure, based on open and standard protocols and technologies, also opening the way for broad adoption of SDN.
The document discusses NSX design and deployment considerations including:
1. Physical and logical infrastructure requirements for NSX including IP connectivity and MTU size.
2. Edge cluster design with options for collapsed or separated edge and infrastructure racks.
3. NSX manager and controller placement and sizing within management clusters.
4. Transport zone, VTEP, and VXLAN switching concepts which are fundamental to the NSX overlay architecture.
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With ... (Cloud Native Day Tel Aviv)
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With Advanced Network and Storage Interconnect Technologies, OpenStack Israel 2015
DPDK Summit 2015 - RIFT.io - Tim Mortsolf (Jim St. Leger)
DPDK Summit 2015 in San Francisco.
Presentation by RIFT.io's CTO Tim Mortsolf.
For additional details and the video recording please visit www.dpdksummit.com.
The document provides information about Brocade SAN switches including their product lines, features, and specifications. It discusses various switch models ranging from 8-port to 384-port configurations supporting 1, 2, 4, 8, and 10Gbps speeds. Features covered include dynamic path selection, ISL trunking, extended fabric, hardware-enforced zoning, advanced performance monitoring, and FCIP tunneling. The document also reviews FOS enhancements, new 10Gbps blades, and concepts like NPIV and NPV.
Achieving the Ultimate Performance with KVMDevOps.com
Building and managing a cloud is not an easy task. It needs solid knowledge, proper planning and extensive experience in selecting the proper components and putting them together.
Many companies build new-age KVM clouds, only to find out that their applications & workloads do not perform well. Join this webinar to learn how to get the most out of your KVM cloud and how to optimize it for performance.
Join this webinar and learn:
- Why performance matters and how to measure it properly
- The main components of an efficient new-age cloud
- How to select the right hardware
- How to optimize CPU and memory for ultimate performance
- Which network components work best
- How to tune the storage layer for performance
Many companies build new-age KVM clouds, only to find out that their applications & workloads do not perform well. In this talk we’ll show you how to get the most out of your KVM cloud and how to optimize it for performance: You’ll understand why performance matters and how to measure it properly. We’ll teach you how to optimize CPU and memory for ultimate performance and how to tune the storage layer for performance. You’ll find out what are the main components of an efficient new-age cloud and which network components work best. In addition, you’ll learn how to select the right hardware to achieve unmatched performance for your new-age cloud and applications.
Venko Moyankov is an experienced system administrator and solutions architect at StorPool storage. He has experience with managing large virtualizations, working in telcos, designing and supporting the infrastructure of large enterprises. In the last year, his focus has been in helping companies globally to build the best storage solution according to their needs and projects.
Achieving the ultimate performance with KVM (ShapeBlue)
This document summarizes a presentation about achieving ultimate performance with KVM. It discusses optimizing hardware, CPU, memory, networking, and storage for virtual machines. The goal is the lowest cost per delivered resource while meeting performance targets. Specific optimizations mentioned include CPU pinning, huge pages, SR-IOV networking, virtio drivers, and bypassing the host for storage. It cautions that many performance claims use unrealistic benchmarks and hardware configurations unlike real-world usage.
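Two of the optimizations named above, huge pages and CPU pinning, can be sketched on a libvirt/KVM host roughly as follows (the domain name "guest1" and the CPU numbers are illustrative, and the exact layout should follow the host's NUMA topology):

```shell
# Reserve 1024 x 2MiB huge pages (2 GiB) on the host so guest RAM
# can be backed by huge pages instead of 4KiB pages.
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Pin each vCPU of domain "guest1" to a dedicated host core to avoid
# scheduler migrations and cache thrashing.
virsh vcpupin guest1 0 2
virsh vcpupin guest1 1 3

# Confirm the resulting vCPU-to-pCPU mapping.
virsh vcpupin guest1
```

As the presentation cautions, such settings help most when benchmarked under realistic workloads rather than synthetic maximum-throughput tests.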
VMworld 2016: vSphere 6.x Host Resource Deep Dive (VMworld)
1. This document provides an overview and agenda for a presentation on vSphere 6.x host resource deep dive topics including compute, storage, and network.
2. It introduces the presenters, Niels Hagoort and Frank Denneman, and provides background on their expertise.
3. The document outlines the topics to be covered under each section, including NUMA, CPU cache, DIMM configuration, I/O queue placement, driver considerations, RSS and NetQueue scaling for networking.
Sharing High-Performance Interconnects Across Multiple Virtual Machines (inside-BigData.com)
In this deck from the Stanford HPC Conference, Mohan Potheri from VMware presents: Sharing High-Performance Interconnects Across Multiple Virtual Machines.
"Virtualized devices offer maximum flexibility: sharing of hardware between virtual machines, the use of VMware vMotion to handle migration and take snapshots. However, when performance is the most critical requirement there are other options. VMware Direct Path I/O delivers excellent performance, but only for a single virtual machine. Single root I/O virtualization (SR-IOV), on the other hand, offers the performance of pass-through mode while allowing devices to be shared by multiple virtual machines.
This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations."
Watch the video: https://youtu.be/-iYYmsBw8SU
Learn more: https://www.vmware.com
and
http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
PCIe peer-to-peer communication can reduce bottlenecks between high-performance I/O devices like SSDs and networking cards by allowing them to transfer data directly without going through the CPU. PMC is developing an NVM Express NVRAM card using DRAM cache that is accessible via the NVMe block driver or custom character driver, and can achieve almost 1 million 4KB IOPS or 10 million 64B IOPS. The company has set up a test hardware and software environment using PCIe devices connected directly to CPU lanes running Debian Linux with custom kernel patches to demonstrate peer-to-peer capabilities.
VMworld 2013
Lenin Singaravelu, VMware
Haoqiang Zheng, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Technical sales education: Enterprise SVC and IBM Flash best practices update (solarisyougood)
- The document provides best practices for implementing IBM FlashSystem storage arrays behind an IBM SVC storage virtualization appliance.
- It covers SVC and FlashSystem updates, switch zoning, cabling considerations, basic SVC guidelines including supported release levels and port masking examples.
- The goal is to optimize performance when using FlashSystem with SVC by leveraging all SVC and FlashSystem ports and features like Real-time Compression and Easy Tier.
This document summarizes OpenStack Compute features related to the Libvirt/KVM driver, including updates in Kilo and predictions for Liberty. Key Kilo features discussed include CPU pinning for performance, huge page support, and I/O-based NUMA scheduling. Predictions for Liberty include improved hardware policy configuration, post-plug networking scripts, further SR-IOV support, and hot resize capability. The document provides examples of how these features can be configured and their impact on guest virtual machine configuration and performance.
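The Kilo-era CPU pinning and huge page features mentioned above are driven through flavor extra specs rather than per-instance settings; a hedged sketch using the OpenStack client follows (the flavor name and sizes are illustrative):

```shell
# Create a flavor whose guests receive dedicated (pinned) host CPUs
# and huge-page-backed memory via the libvirt/KVM driver.
openstack flavor create --vcpus 4 --ram 8192 --disk 40 pinned.large
openstack flavor set pinned.large \
    --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=large
```

Instances booted from such a flavor are then scheduled onto hosts with enough free pinned CPUs and huge pages, which is where the I/O-based NUMA scheduling described above comes into play.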
Running Applications on the NetBSD Rump Kernel by Justin Cormack (eurobsdcon)
Abstract
The NetBSD rump kernel has been developed for some years now, allowing NetBSD kernel drivers to be used unmodified in many environments, for example as userspace code. However, it is only since last year that it has become possible to easily run unmodified applications on the rump kernel, initially with the rump-kernel-on-Xen port, and then with the rumprun tools to run them in userspace on Linux, FreeBSD and NetBSD. This talk will look at how this is achieved and at use cases, including kernel driver development and lightweight process virtualization.
Speaker bio
Justin Cormack has been a Unix user, developer and sysadmin since the early 1990s. He is based in London and works on open source cloud applications, Lua, and the NetBSD rump kernel project. He has been a NetBSD developer since early 2014.
In this session, Boyan Krosnov, CPO of StorPool will discuss a private cloud setup with KVM achieving 1M IOPS per hyper-converged (storage+compute) node. We will answer the question: What is the optimum architecture and configuration for performance and efficiency?
Install FD.IO VPP On Intel(r) Architecture & Test with TRex* (Michelle Holley)
This demo/lab will guide you through installing and configuring FD.io Vector Packet Processing (VPP) on an Intel® Architecture (IA) server. You will also learn to install TRex* on another IA server to send packets to the VPP, and use some VPP commands to forward packets back to the TRex*.
Speaker: Loc Nguyen. Loc is a Software Application Engineer in Data Center Scale Engineering Team. Loc joined Intel in 2005, and has worked in various projects. Before joining the network group, Loc worked in High-Performance Computing area and supported Intel® Xeon Phi™ Product Family. His interest includes computer graphics, parallel computing, and computer networking.
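The VPP commands used in this kind of lab typically look like the following; the interface names are illustrative and depend on which NICs VPP's DPDK plugin binds:

```shell
# Bring up the two DPDK-bound interfaces from the VPP CLI.
vppctl set interface state GigabitEthernet0/8/0 up
vppctl set interface state GigabitEthernet0/9/0 up

# Assign IP addresses on the subnets TRex sends traffic from.
vppctl set interface ip address GigabitEthernet0/8/0 10.10.1.1/24
vppctl set interface ip address GigabitEthernet0/9/0 10.10.2.1/24

# Inspect interface state and packet counters while TRex runs.
vppctl show interface
```

With routes (or static ARP entries) pointing back at the TRex ports, VPP forwards the generated traffic between the two interfaces, which is what the lab measures.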
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture (Ceph Community)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, and techniques for configuring and optimizing all-flash Ceph performance.
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture (Danielle Womboldt)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, QCT's lab test environment, Ceph tuning recommendations, and benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
The document discusses various ways to optimize storage performance for virtual machines, including:
1) Provisioning virtual disks using different QEMU emulated devices like virtio-blk and configuring the IOThread option to improve performance.
2) Performing NUMA pinning to ensure virtual CPUs, memory and I/O threads are placed on the same NUMA node as the host storage device.
3) Configuring virtual machine options like using raw block devices instead of image files, enabling the IOThread, and tuning QEMU and image file parameters to improve I/O performance.
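Items 1 and 3 above (a virtio-blk disk on a raw block device, served by a dedicated IOThread) can be sketched directly on the QEMU command line; the device path and object IDs are illustrative:

```shell
# Attach a raw host block device as a virtio-blk disk handled by a
# dedicated IOThread, so disk I/O processing does not contend with
# the vCPU threads; cache.direct=on bypasses the host page cache.
qemu-system-x86_64 \
    -machine q35,accel=kvm -m 4096 -smp 4 \
    -object iothread,id=iothread0 \
    -blockdev node-name=disk0,driver=host_device,filename=/dev/sdb,cache.direct=on \
    -device virtio-blk-pci,drive=disk0,iothread=iothread0
```

For the NUMA pinning in item 2, the IOThread itself can additionally be pinned (for example with libvirt's `<iothreadpin>` element) to the node that hosts the storage controller.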
Advanced performance troubleshooting using esxtop (Alan Renouf)
This document discusses using esxtop and resxtop tools to troubleshoot performance issues on VMware ESXi hosts. It provides 10 key things to know about esxtop counters and how they work. It then gives examples of using esxtop to troubleshoot common problems like CPU contention, memory issues, network throughput problems, and disk I/O latency. It also lists some other diagnostic tools that can be used along with esxtop.
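For the kind of troubleshooting described above, esxtop's batch mode is commonly used to capture counters for offline analysis; the interval and sample count below are illustrative:

```shell
# Capture all esxtop counters in batch mode: 60 samples at a
# 5-second interval, written as CSV for offline analysis
# (e.g. in Windows perfmon or a spreadsheet).
esxtop -b -a -d 5 -n 60 > esxtop_capture.csv
```

The same flags work with resxtop when run remotely against an ESXi host, which avoids adding load to the host being diagnosed.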
This document discusses opportunities for Arm in data center and edge computing infrastructure. It outlines Arm's growing footprint in servers through partners like AWS, Ampere, Marvell, and provides an overview of the Neoverse roadmap. It also discusses how Arm can address markets like smartNICs and uCPE through integrated solutions with better performance and cost than x86.
Implementing SR-IOV failover for Windows guests during live migration (Yan Vugenfirer)
Presentation from KVM Forum 2020.
In the past, there were several attempts to enable live migration for VMs that use SR-IOV NICs. We are going to discuss the recent development based on the SR-IOV failover feature in the virtio specification and its implementation for Windows guests. In this session, Annie Li and Yan Vugenfirer will provide an overview of the failover feature and discuss specifics of the Windows guest implementation.
Slides at OpenStack Summit 2017 Sydney
Session Info and Video: https://www.openstack.org/videos/sydney-2017/100gbps-openstack-for-providing-high-performance-nfv
This document discusses InfiniGuard's data protection solution and its advantages over other backup appliances. It highlights InfiniGuard's ability to provide fast restore times even for large datasets through its use of InfiniBox storage technology. The document also covers how InfiniGuard addresses modern threats like ransomware through immutable snapshots, logical air-gapping of backups, and an isolated forensic network to enable fast recovery from cyber attacks.
Get the most out of your Oracle database!
Ondřej Buršík
Senior Presales, Oracle
Arrow / Oracle
The document discusses maximizing the use of Oracle databases. It covers topics such as resilience, performance and agility, security and risk management, and cost optimization. It promotes Oracle Database editions and features, as well as Oracle Engineered Systems like Exadata, which are designed to provide high performance, availability, security and manageability for databases.
Presentation from the webinar on 10 March 2022
Presented by:
Jaroslav Malina - Senior Channel Sales Manager, Oracle
Josef Krejčí - Technology Sales Consultant, Oracle
Josef Šlahůnek - Cloud Systems sales Consultant, Oracle
Presentation from the webinar on 9 February 2022
Presented by:
Jaroslav Malina - Senior Channel Sales Manager, Oracle
Josef Krejčí - Technology Sales Consultant, Oracle
Josef Šlahůnek - Cloud Systems Sales Consultant, Oracle
The document discusses Oracle Database Appliance (ODA) high availability and disaster recovery solutions. It compares Oracle Real Application Clusters (RAC), RAC One Node, and Standard Edition High Availability (SEHA). RAC provides automatic restart and failover capabilities for load balancing across nodes. RAC One Node and SEHA provide restart and failover, but no load balancing. SEHA is suitable for Standard Edition databases if up to 16 sessions are adequate and a few minutes of reconnection time is acceptable without data loss during failover.
This document discusses InfiniGuard, a data protection solution from Infinidat. It highlights challenges with current backup solutions, including slow restore times. InfiniGuard addresses this by leveraging InfiniBox storage technology to meet restore objectives, providing fast, scalable backup and restore performance. The document also discusses threats from server-side encryption attacks and how InfiniGuard's immutable snapshots and isolated backup environment provide cyber resilience against them.
This document discusses Infinidat's scale-out storage solutions. It highlights Infinidat's unique software-driven architecture with over 100 patents. Infinidat systems can scale to over 7 exabytes deployed globally across various industries. Analyst reviews show Infinidat receiving higher ratings than Dell EMC, HPE, NetApp, and others. The InfiniBox systems offer multi-petabyte scale in a single rack with high performance, reliability, and efficiency.
This document discusses Oracle Database 19c and the concept of a converged database. It begins with an overview of new features in Oracle Database 19c, including direct upgrade paths, new in-memory capabilities, and improvements to multitenant architecture. It then discusses the concept of a converged database that can support multiple data types and workloads within a single database compared to using separate single-purpose databases. The document argues that a converged database approach avoids issues with data consistency, security, availability and manageability between separate databases. It notes Oracle Database's support for transactions, analytics, machine learning, IoT and other workloads within a single database. The document concludes with an overview of Oracle Database Performance Health Checks.
The document discusses Infinidat's scale-out storage solutions. It highlights Infinidat's unique software-driven architecture with over 100 patents. Infinidat solutions can scale to multi-petabyte capacity in a single rack and provide high performance, reliability, and cost-effectiveness compared to other storage vendors. The document also covers Infinidat's flexible business models, replication capabilities, and easy management tools.
The document discusses Oracle's Database Options Initiative and how it can help organizations address challenges in a post-pandemic world. It outlines bundles focused on security & risk resilience, operational resiliency, cost optimization, and performance & agility. Each bundle contains various Oracle database products and capabilities designed to provide benefits like reduced costs, increased availability, faster performance, and enhanced security. The document also provides information on specific products and how they address needs such as disaster recovery, data protection, database management, and query optimization.
Oracle's Data Protection Solutions Will Help You Protect Your Business Interests
The document discusses Oracle's data protection solutions, specifically the Oracle Recovery Appliance. The Recovery Appliance provides continuous data protection for Oracle databases with recovery points of less than one second. It offers faster restore performance compared to generic data protection appliances. The Recovery Appliance fully integrates with Oracle databases and offers features like real-time data validation and monitoring of data loss exposure.
The document discusses strategies for protecting data, including:
1. Implementing a well-defined data protection architecture using Oracle Database security controls and services like Data Safe to assess risks, discover sensitive data, and audit activities.
2. Using high availability technologies like Oracle Real Application Clusters and disaster recovery options like Data Guard and GoldenGate to ensure redundancy and meet recovery objectives.
3. Addressing challenges with traditional backup and restore approaches, and the need for a new solution given critical failures and correction costs of $2.5M per year.
OCI Storage Services provides different types of storage for various use cases:
- Local NVMe SSD storage provides high-performance temporary storage that is not persistent.
- Block Volume storage provides durable block-level storage for applications requiring SAN-like features through iSCSI. Volumes can be resized, backed up, and cloned.
- File Storage Service provides shared file systems accessible over NFSv3 that are durable and suitable for applications like EBS and HPC workloads.
This document discusses Oracle Cloud Infrastructure compute options including bare metal instances, virtual machine instances, and dedicated hosts. It provides details on instance types, images, volumes, instance configurations and pools, autoscaling, metadata, and lifecycle. Key points covered include the differences between bare metal, VM, and dedicated host instances, bringing your own images, customizing boot volumes, using instance configurations and pools for management and autoscaling, and accessing instance metadata.
Exadata from the customer's perspective and what's new in the X8M generation - Part 1 (Exadata z pohledu zákazníka a novinky generace X8M) (MarketingArrowECS_CZ)
Oracle's Exadata X8M is a new database platform that provides the best performance for running Oracle Database. It uses a scale-out architecture with optimized compute, storage, and networking resources. New features include shared persistent memory that provides latency of 19 microseconds and speeds up log writes by 8x. Exadata X8M also delivers 3x more throughput, 2x more IOPS, and 5x lower latency than competing all-flash arrays. It offers the highest database performance scaling linearly with additional racks.
Oracle Cloud Infrastructure (OCI) provides Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) through a global network of 29 regions. OCI offers high-performance computing resources, storage, networking, security, and edge services to support traditional and cloud-native workloads. Pricing for OCI is consistently lower than other major cloud providers for equivalent services, with flexible payment models and usage-based pricing.
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppSec (James Anderson)
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams, employing DevSecOps principles to integrate security throughout the development
AC Atlassian Coimbatore Session Slides( 22/06/2024)apoorva2579
This is the combined Sessions of ACE Atlassian Coimbatore event happened on 22nd June 2024
The session order is as follows:
1.AI and future of help desk by Rajesh Shanmugam
2. Harnessing the power of GenAI for your business by Siddharth
3. Fallacies of GenAI by Raju Kandaswamy
What's Next Web Development Trends to Watch.pdfSeasiaInfotech2
Explore the latest advancements and upcoming innovations in web development with our guide to the trends shaping the future of digital experiences. Read our article today for more information.
How Netflix Builds High Performance Applications at Global ScaleScyllaDB
We all want to build applications that are blazingly fast. We also want to scale them to users all over the world. Can the two happen together? Can users in the slowest of environments also get a fast experience? Learn how we do this at Netflix: how we understand every user's needs and preferences and build high performance applications that work for every user, every time.
Are you interested in learning about creating an attractive website? Here it is! Take part in the challenge that will broaden your knowledge about creating cool websites! Don't miss this opportunity, only in "Redesign Challenge"!
Quality Patents: Patents That Stand the Test of TimeAurora Consulting
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
How to Avoid Learning the Linux-Kernel Memory ModelScyllaDB
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with a simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
How Social Media Hackers Help You to See Your Wife's Message.pdfHackersList
In the modern digital era, social media platforms have become integral to our daily lives. These platforms, including Facebook, Instagram, WhatsApp, and Snapchat, offer countless ways to connect, share, and communicate.
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
Details of description part II: Describing images in practice - Tech Forum 2024BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
3. Hardware Virtualization
• Guest virtual machines run on top of a host machine
• A virtual machine acts like a real computer, with an operating system and devices
• Virtual hardware: CPUs, memory, I/O
• The software or firmware that creates a virtual machine on the host hardware is called a hypervisor
4. Virtualization types
Fully virtualized
• Guest OS is not modified; the same OS image is spun up as a VM
• Guest OS is not aware of virtualization; devices are emulated entirely
• Hypervisor needs to trap and translate privileged instructions
Para-virtualized
• Guest OS is aware that it is running in a virtualized environment
• Guest OS and hypervisor communicate through "hypercalls" for improved performance and efficiency
• Guest OS uses a front-end driver for I/O operations
• Example: Juniper vRR, vMX
Hardware-assisted
• Virtualization-aware hardware (processors, NICs, etc.)
• Intel VT-x/VT-d/VMDq, AMD-V
• Example: Juniper vMX
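As a practical aside, hardware-assisted virtualization support can be checked on a Linux host by looking for the Intel VT-x ("vmx") or AMD-V ("svm") CPU flags. The sketch below is illustrative only; it assumes a Linux-style /proc/cpuinfo and the function name is my own.

```python
# Illustrative check (Linux only): count logical CPUs that advertise the
# Intel VT-x ("vmx") or AMD-V ("svm") flag in /proc/cpuinfo.

def hw_virt_capable_cpus(cpuinfo_path="/proc/cpuinfo"):
    """Return how many 'flags' lines list vmx or svm; 0 if unreadable."""
    try:
        with open(cpuinfo_path) as f:
            lines = f.readlines()
    except OSError:
        return 0
    count = 0
    for line in lines:
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                count += 1
    return count

if __name__ == "__main__":
    n = hw_virt_capable_cpus()
    print("hardware-assisted virtualization"
          + (" available" if n else " not detected"))
```

A count of zero means the hypervisor must fall back to full emulation or para-virtualization for that guest.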
7. Virtual and Physical MX
[Diagram: control plane and data plane compared across platforms. On the physical MX, the PFE runs microcode on Trio ASICs; on the vMX, the vPFE runs the same microcode on x86.]
8. VMX Product
• Virtual JUNOS hosted on a VM
• Follows standard JUNOS release cycles
• Hosted on a VM, bare metal, or Linux containers
• Multi-core
• SR-IOV, virtIO, vmxnet3, …
• Two components: VCP (Virtualized Control Plane) and VFP (Virtualized Forwarding Plane)
9. vMX Product Overview
[Diagram: the VFP runs in a Linux guest VM and the VCP in a FreeBSD guest VM on a KVM or ESXi hypervisor; physical NICs reach the VFP either via PCI passthrough/SR-IOV or via VirtIO through a bridge/vSwitch, while management traffic reaches the VCP.]
Virtual Control Plane (VCP)
• JUNOS hosted in a VM; offers all the capabilities available in JUNOS
• Management remains the same as on a physical MX
• SMP capable
Virtual Forwarding Plane (VFP)
• Virtualized Trio software forwarding plane with feature parity with the physical MX; utilizes Intel DPDK libraries
• Multi-threaded SMP implementation allows for elasticity
• SR-IOV capable for high throughput
• Can be hosted in a VM or on bare metal
Orchestration
• A vMX instance can be orchestrated through OpenStack Kilo HEAT templates
• The package comes with scripts to launch a vMX instance
11. VMX Forwarding Model
• Forwarding with Trio ASICs on MX: center chip (MQ, XM, …), lookup chip (LU, XL, …), queuing chip (QX, XQ, …)
• Forwarding with x86 on vMX: RIOT running on top of DPDK
16. VMX QoS
[Diagram: three-level scheduling hierarchy, with the port at level 1, VLANs (VLAN 1 … VLAN n) at level 2, and six queues Q0–Q5 per VLAN at level 3, grouped into high, medium, and low priorities.]
§ Port:
§ Shaping-rate
§ VLAN:
§ Shaping-rate
§ 4k per IFD
§ Queues:
§ 6 queues
§ 3 priorities: 1 high, 1 medium, 4 low
§ Priority-group scheduling follows strict priority for a given VLAN
§ Queues of the same priority for a given VLAN use WRR
§ High and medium queues are capped at the transmit-rate
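The scheduling rules above (strict priority across the three priority groups, WRR among queues that share a priority) can be sketched in Python. This is an illustrative model, not vMX code: the queue-to-priority mapping and WRR weights are my own choices, and the per-queue transmit-rate caps are omitted.

```python
# Illustrative model of the per-VLAN scheduler described above:
# strict priority between priority groups, weighted round-robin (WRR)
# among queues of the same priority. Queue mapping assumed for the
# example: q0 = high, q1 = medium, q2..q5 = low.

from collections import deque

class VlanScheduler:
    """One scheduler node per VLAN: 6 queues, 3 priorities."""
    # priority 0 = high, 1 = medium, 2 = low (strict order)
    PRIORITIES = {"q0": 0, "q1": 1, "q2": 2, "q3": 2, "q4": 2, "q5": 2}

    def __init__(self, weights):
        self.queues = {q: deque() for q in self.PRIORITIES}
        self.weights = dict(weights)   # WRR weight per queue
        self.credits = dict(weights)   # remaining WRR credit per queue

    def enqueue(self, queue, packet):
        self.queues[queue].append(packet)

    def dequeue(self):
        """Serve the highest non-empty priority; WRR inside that priority."""
        active = [q for q, pkts in self.queues.items() if pkts]
        if not active:
            return None
        top = min(self.PRIORITIES[q] for q in active)
        group = [q for q in active if self.PRIORITIES[q] == top]
        # WRR: pick a queue in the group that still has credit left
        candidates = [q for q in group if self.credits[q] > 0]
        if not candidates:             # round over: refill group credits
            for q in group:
                self.credits[q] = self.weights[q]
            candidates = group
        q = candidates[0]
        self.credits[q] -= 1
        return self.queues[q].popleft()
```

For example, if a low-priority and a high-priority packet are both waiting, the high-priority packet is always served first; two queues of the same priority share service in proportion to their weights.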
18. Revisit: x86 Server Architecture
[Diagram: two CPU sockets, each with twelve cores and local memory behind its own memory controller; NICs attach through per-socket PCI controllers.]
19. vMX Environment
Sample system configuration:
• System: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache. NIC: Intel 82599 (for SR-IOV only)
• Memory: minimum 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
• Storage: local or NAS
Sample configuration for number of CPUs, by use case:
• vMX for up to 100 Mbps performance: min # of vCPUs: 4 [1 vCPU for VCP and 3 vCPUs for VFP]. Min # of cores: 2 [1 core for VFP and 1 core for VCP]. Min memory: 8 GB. VirtIO NIC only.
• vMX for up to 3 Gbps of performance: min # of vCPUs: 4 [1 vCPU for VCP and 3 vCPUs for VFP]. Min # of cores: 4 [3 cores for VFP, 1 core for VCP]. Min memory: 8 GB. VirtIO or SR-IOV NIC.
• vMX for 3 Gbps and beyond (assuming min 2 ports of 10G): min # of vCPUs: 5 [1 vCPU for VCP and 4 vCPUs for VFP]. Min # of cores: 5 [4 cores for VFP, 1 core for VCP]. Min memory: 8 GB. SR-IOV NIC only.
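The sizing table above can be summarized in code. The following is a hypothetical helper (the function name and structure are mine, not a vMX tool) that returns the minimum footprint for a target throughput:

```python
# Hypothetical sizing helper based on the table above. Returns the minimum
# (vCPUs, cores, memory in GB, supported NIC modes) for a target throughput.

def vmx_min_footprint(throughput_gbps):
    if throughput_gbps <= 0.1:        # up to 100 Mbps
        return (4, 2, 8, ["virtio"])
    if throughput_gbps <= 3.0:        # up to 3 Gbps
        return (4, 4, 8, ["virtio", "sr-iov"])
    return (5, 5, 8, ["sr-iov"])      # beyond 3 Gbps: SR-IOV required
```

For instance, a 2 Gbps target falls into the 4-core profile, where either VirtIO or SR-IOV will do; anything above 3 Gbps requires SR-IOV.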
20. vMX Environment
Use-case 1: vMX instance up to 100 Mbps
Min # of vCPUs: 4 [1 vCPU for VCP and 3 vCPUs for VFP]
Min # of cores: 2 [1 core for VCP, 1 core for VFP]
Min memory: 8 GB
NIC: VirtIO is sufficient
[Diagram: on an 8-core socket, the VCP (JUNOS, vCPU 0) is pinned to one core; the VFP I/O (TX & RX) and Worker vCPUs share a second core; the remaining cores are left to the host OS.]
Use-case 2: vMX instance up to 3 Gbps
Min # of vCPUs: 4 [1 vCPU for VCP and 3 vCPUs for VFP]
Min # of cores: 4 [1 core for VCP. For VFP, assume 2 ports of 1G/10G with a dedicated I/O core, 1 core for each Worker, and 1 core for the host interface]
Min memory: 8 GB
NIC: VirtIO is sufficient; SR-IOV can also be used
[Diagram: the VCP (JUNOS) runs on its own core; VFP vCPUs for per-port I/O (TX & RX), the Worker, and the host interface are each pinned to dedicated cores, with the remaining cores left to the host OS.]
21. vMX Environment
Use-case 3: >3 Gbps of throughput per instance
Assume 2 ports of 10G for I/O
Min # of vCPUs: 5 [1 vCPU for VCP and 4 vCPUs for VFP]
Min # of cores: 5 [1 core for VCP. For VFP, assume 2 ports of 10G each with a dedicated I/O core, 1 core for each Worker, and 1 core for the host interface]
Min memory: 8 GB
NIC: SR-IOV must be used
[Diagram: the VCP (JUNOS) runs on its own core; VFP vCPUs for I/O port 1, I/O port 2, Workers 1..n, and the host interface are each pinned to dedicated cores.]
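The VFP thread layout in use-cases 2 and 3 (one dedicated I/O thread per port, one or more workers, one host-interface thread) can be sketched as a small helper. The function and role names are my own illustration, not a vMX API:

```python
# Illustrative sketch of the VFP thread roles described above: one dedicated
# I/O thread per port, N worker threads, and one host-interface thread.
# Each role maps to one vCPU pinned to its own core. Role names are invented.

def vfp_thread_roles(num_ports, num_workers):
    roles = [f"io-port-{p}-tx-rx" for p in range(1, num_ports + 1)]
    roles += [f"worker-{w}" for w in range(1, num_workers + 1)]
    roles.append("host-interface")
    return roles
```

With 2 ports and 1 worker this yields 4 roles, matching the 4 VFP vCPUs of use-case 3 (plus one vCPU for the VCP gives the 5-vCPU minimum).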
25. vLNS for business or wholesale/retail
• A separate vLNS instance is available for each:
• Business VPN
• Retail ISP
• vLNS sized precisely to serve the required PPP and L2TP sessions
[Diagram: CPE connects over PPP/PPPoE through an access node and aggregation to a LAC/vLAC; an L2TP tunnel carries sessions to vLNS instances in the data centre, which use the wholesale and retail ISP AAA servers and connect via peering and core-side ports to the customer VPN, the retail ISP, and the Internet.]
26. Service Provider vMX Use Case: Virtual PE (vPE)
[Diagram: branch-office and SMB CPE reach a vPE in the DC fabric through the DC/CO gateway; across the provider MPLS cloud the vPE terminates a pseudowire via an L2 PE, an L3VPN via an L3 PE, and IPsec/overlay technology, and also connects to Internet peering.]
27. vBNG for BNG near CO
vBNG deployment model:
[Diagram: CPE connects over DSL or fiber to an OLT/DSLAM; L2 switches aggregate the traffic over Ethernet into a central office with cloud infrastructure hosting the vBNG, which connects to the SP core and the Internet.]
• The business case is strongest when a vBNG aggregates 12K or fewer subscribers
28. Parts of a cloud
§ CGWR: cloud gateway router; could be a router, server, or switch
§ Switches: switch features and overlay technology as needed
§ Servers: includes cabling between servers and ToRs, mapping of virtual instances to ports, core capacity, and virtual machines
[Diagram: leaf/spine fabric with cloud gateways at the top; each leaf/ToR connects via NIC1/NIC2 to KVM servers (Server-1, Server-2) hosting vLNS and other VNFs, with example addresses 1.1.1.1 through 4.4.4.4 and interfaces ge1 to ge4.]
29. vMX with service chaining: potential vCPE use case
[Diagram: three deployment variants giving the branch office CPE-like functionality in the cloud. (1) vMX as vCPE: the branch-office switch connects across the provider MPLS cloud and the DC/CO gateway to a vMX service-chained through a vFirewall and vNAT. (2) The branch-office switch connects across the provider MPLS cloud to a DC/CO fabric with Contrail overlay and a vPE. (3) The branch-office switch reaches the cloud via L2 PEs and a PE, with Internet access.]