This document summarizes the configuration and operation of Flannel CNI on a Kubernetes cluster. It shows logs from the Flannel pod running on the controller node, which sets up the overlay network and subnet leasing. It also shows logs from Flannel pods on worker nodes, which join the overlay network and configure iptables rules and IP masquerading for pod networking. The logs demonstrate how Flannel establishes connectivity between pods on different nodes using host-gateway mode.
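As a sketch of what these Flannel pods consume, the backend is selected in Flannel's net-conf.json (commonly shipped in the kube-flannel ConfigMap); the network CIDR below is illustrative:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
```

With the host-gw backend, each node installs routes to its peers' leased pod subnets rather than encapsulating packets, which matches the subnet-lease activity visible in the logs.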
This document discusses best practices for configuring Docker containers for Ruby on Rails applications hosted on DigitalOcean. It recommends using environment variables stored in files like Figaro or Dotenv for database credentials and other config, avoiding secrets in Dockerfiles, bundling gems before building images, and running the app command in the container.
SaltConf 2015: Salt Stack at Web Scale: Better, Stronger, Faster - Thomas Jackson
This talk will discuss best practices for scaling SaltStack from thousands to hundreds of thousands of minions. But the devil is in the details: how do you scale without losing performance, and how do you make sure it all works? At LinkedIn we've learned some valuable lessons as we've grown our SaltStack footprint. We'll discuss how to run SaltStack, how not to run SaltStack, and how we've contributed to the Salt project to help make it better, stronger and faster.
Youtube: https://www.youtube.com/watch?v=qjFOY-QrW_k
This talk will focus on how we deploy and manage CentOS on our fleet at Facebook, and showcase challenges, best practices and lessons learned working with a deployment of hundreds of thousands of machines. We'll discuss challenges encountered over the years, tools that we developed to overcome them, the process used to integrate upstream updates, packaging tools and workflows and configuration management challenges. The talk is mostly focused on bare metal, but will cover some container best practices as well. We'll also focus on our interactions with the RPM, Yum, Anaconda and systemd projects to showcase how to work with the upstream community.
Puppet is an important part of Satellite 6. In this presentation, I introduce Puppet, show how to quickly set up a Puppet server and a Puppet client, and finally explain how to write Puppet recipes with the goal of importing them into Satellite 6.
This document discusses Puppet workflows, including:
1. Basic and end-to-end Puppet workflows involving code repositories, Puppet Masters, agents, and VMs.
2. Options for node classification, certificate exchange, and provisioning VMs in end-to-end workflows.
3. Example workflows involving testing, rapid scaling, and planning considerations like users, timescales and legacy systems.
This document discusses strategies for packaging and implementing a community OpenStack distribution and provides examples of how it has been used to build various infrastructure platforms including: 1) migrating from a commercial to community OpenStack distribution; 2) building a GPU server farm for AI/analytics; 3) providing flexibility to run workloads on OpenStack or AWS; 4) building an IoT platform using OpenStack and AWS; and 5) creating a map data platform using large shared storage.
This document lists 43 bugs related to OpenStack documentation. Many of the bugs are documentation errors or inconsistencies in the OpenStack manuals. A few bugs note missing information, such as documentation for specific configurations or disaster recovery procedures. The bugs provide links to more details on the OpenStack bug tracking website.
Moving to Nova Cells without Destroying the World - Mike Dorman
This document discusses using cells in OpenStack Nova to scale deployments. Cells create a hierarchy with a top-level API cell and multiple compute cells. Each cell has its own database, message queue, and services. The document outlines planning the conversion, preparing the environment by expanding RabbitMQ and splitting services, configuring the compute and API cells, importing data to the API cell, and restarting services. It notes some caveats like limitations on certain notifications and objects between cells.
Puppet Camp London Fall 2015 - Service Discovery and Puppet - Marc Cluet
This document discusses service discovery and how it can be implemented using Consul. It begins with an introduction to the presenter and overview of service discovery challenges. The main points are:
- Consul is a service discovery tool that allows services to register themselves and discover other services via API or DNS queries. It supports health checking and secure key-value storage.
- Consul uses agents running on each node that register services and perform health checks. Services can be discovered via the REST API or DNS queries. It provides a strongly consistent key-value store.
- Puppet can integrate with Consul for service discovery via Puppet modules, a Hiera backend, or direct API access. This allows dynamically generating configurations from the service information stored in Consul.
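As an illustration of the registration side, a service can be declared to the local Consul agent with a JSON definition like the following (the service name, port and check endpoint are made up):

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

Once registered, other nodes can discover the service through the agent's DNS interface (e.g. resolving `web.service.consul`) or through the HTTP API.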
This document outlines an agenda for an Nginx essentials presentation. The presentation introduces concepts like HTTP protocols and web servers. It covers installing and configuring Nginx, including its HTTP module and features like load balancing and SSL. It also discusses debugging, customizing Nginx using modules like Tengine and OpenResty, and provides example use cases and references for further reading.
Puppet Camp Berlin 2015: Andrea Giardini | Configuration Management @ CERN: G... - NETWAYS
In 2011, CERN decided to start using Puppet as its main tool for development, machine configuration and provisioning, as a replacement for Quattor.
Since then the infrastructure has changed a lot: the "Agile Infrastructure" project has evolved into a series of tools that currently allow more than 10,000 nodes to be configured and provisioned following custom definitions.
Foreman, Git, OpenStack and our homemade librarian Jens are only a few of the tools that will be described during the talk, which aims to give an overview of the current workflow for the machine lifecycle at CERN.
This talk will cover how Puppet allows us to deal with several hundred installations a day and, at the same time, provide highly customizable machine configurations for service owners.
An example of how you can leverage the Salt event bus to support your infrastructure lifecycle for monitoring with Zabbix.
This enables workflows such as automatically applying the associated monitoring templates when Salt states are added to a minion, or automatically removing hosts from Zabbix when they are decommissioned.
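The first workflow can be sketched with Salt's reactor system, which maps events on the bus to SLS files; the file paths and the `zabbix.agent` state name below are illustrative:

```yaml
# /etc/salt/master.d/reactor.conf: map events on the bus to reactor SLS files
reactor:
  - 'salt/minion/*/start':            # fires when a minion (re)connects
    - /srv/reactor/zabbix_register.sls

# /srv/reactor/zabbix_register.sls (a separate file): apply a monitoring
# state to the minion that raised the event
register_monitoring:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - zabbix.agent
```

The decommissioning direction would follow the same pattern, reacting to a teardown event and calling a runner or module that removes the host from Zabbix.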
The document provides an overview of Consul administration at scale at Criteo. Key points:
- Criteo uses Consul for service discovery across 35k servers with 3200 services and 260k instances
- A dedicated team of 5 people manages Consul infrastructure and tools 24/7
- Automation is key to make Consul predictable at scale through standardized service registration, ACLs, and automation tools
- Metrics, logs, and monitoring are critical to detect issues with Consul and the services it manages
Scott Moore presented on performance testing HTTP/3 (QUIC). He provided background on the timeline of HTTP/3 development and how it aims to solve head of line blocking that still exists with HTTP/2 over TCP. Moore demonstrated performance tests of HTTP/1.1, HTTP/2, and HTTP/3 with and without network emulation of LTE and satellite connections. The results showed HTTP/3's potential to remove head of line blocking but also challenges around increased server load and UDP optimizations needed. Moore concluded that HTTP/3 benefits will be most noticeable on low bandwidth connections and additional protocols will need to support it as adoption increases over time.
This document provides instructions for using Mininet and the POX controller to simulate a software-defined networking (SDN) environment. It describes downloading a virtual machine appliance containing Mininet and checking the Linux setup. It then explains how to use Mininet to create a simple network topology with hosts, switches, and links. Basic OpenFlow commands are demonstrated like adding flows manually and viewing flow tables. Finally, it shows how to activate Wireshark for examining OpenFlow traffic.
Joined by Rick Nelson, Technical Solutions Architect from NGINX, Server Density takes you through the do's and don'ts of monitoring NGINX: critical and non-critical metrics to monitor, important alerts to configure, and the best monitoring tools available.
Slides accompanying the tale of using saltcheck from the Salt project to validate OS, app deployment, users, AWS resources, and more for multiple Hadoop clusters.
CCNP Enterprise workbook v1.0, completed till weight - SagarR24
The document provides configuration instructions for Lab 1 tasks on switches SCOTSW01 through SCOTSW08. The tasks include defining hostnames, creating VLANs 99-120 and 666-999, suspending VLAN 999, creating a management interface on VLAN 99, and enabling Telnet and SSH access for the "admin" user. Users are instructed to configure these items on each switch as per the topology, using the provided configuration examples.
- The document describes configuring trunk links, VTP, and VLANs on switches SCOTSW01-08.
- Key steps include setting the VTP domain to "CCNP_ENTERPRISE", using VTP version 2 with MD5 authentication, configuring different VTP modes on switches, setting the native VLAN to 666, allowing only certain VLANs on trunks, and setting the MTU for VLAN 811.
- The objectives are to synchronize VLAN configurations between switches using VTP in the network and restrict VLAN traffic as specified.
- The document describes configuring trunk links, VTP, and VLAN settings on multiple switches.
- Key steps include configuring VTP version 2 with the domain "CCNP_ENTERPRISE" and password "cisco", setting different VTP modes on switches, configuring trunk ports, allowing only certain VLANs on trunks, and disabling VLAN 666 from being trunked.
- The configurations are demonstrated on switches SCOTSW01 through SCOTSW08 to match the given topology.
How do you understand what is happening on a server? / Alexander Krizhanovsky (NatSys Lab., ...) - Ontico
You start a server (a database, a web server, or something of your own) and don't get the desired RPS. You run top and see that the CPU is 100% consumed. What next: where is the processor time going? Are there knobs you can turn to improve performance? And if CPU usage is not high, where do you look next?
We will walk through several performance-problem scenarios, review the available performance-analysis tools, and work out a methodology for Linux performance optimization, answering the question of which knobs to turn and how.
This document discusses optimizing Linux boot times on the Raspberry Pi. It begins with an overview of generic boot optimization concepts like identifying and measuring boot components, removing unnecessary functionality, and reordering initialization. It then presents a case study of optimizing boot for Raspbian on the Raspberry Pi through techniques like disabling unneeded services, assigning a static IP, using a minimal custom distro, and kernel optimizations like disabling initcalls and reducing the kernel size. The goal is to achieve an SSH login within 25 seconds instead of the original 30 seconds.
The shift to cloud computing means that organizations are undergoing a major change as they develop scale-out infrastructure that can respond to the pace of business change faster than ever before. Opscode Chef® is an open-source systems integration framework built specifically for automating the cloud by making it easy to deploy and scale servers and applications throughout your infrastructure. Join us for this session containing an introduction to Chef, including:
An Overview of Chef
The Chef Architecture
Cookbook Components
System Integration
Live demo launching a Java Stack on Amazon EC2, Rackspace, Ubuntu, and CentOS
[Presented as part of the Open Source Build a Cloud program on 2/29/2012 - http://cloudstack.org/about-cloudstack/cloudstack-events.html?categoryid=6]
The document describes configuring trunk links and VTP between 8 switches. Key steps include:
1. Configuring VTP domain, version 2, and MD5 password on all switches, with switches 1-2 as servers, 3-4 as transparent, and 5-6 as clients.
2. Configuring trunk ports between switches and allowing only active VLANs 99, 100, 110, 120, 666, 999.
3. Setting VLAN 811 MTU to 1400 and ensuring VLAN 666 is not native on trunks.
4. Configuring switches 1-2 as transparent to prevent synchronization of VLAN changes.
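The steps above can be sketched in IOS configuration roughly as follows (interface names are illustrative, and the VTP mode differs per switch as described):

```
! VTP settings (mode is server, transparent or client depending on the switch)
vtp domain CCNP_ENTERPRISE
vtp version 2
vtp password cisco             ! carried as an MD5 digest in VTP messages
vtp mode server
!
vlan 811
 mtu 1400
!
! Trunk toward a neighboring switch, restricted to the active VLANs
interface range GigabitEthernet1/0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 99,100,110,120,666,999
```

Leaving the native VLAN at its default of 1 keeps VLAN 666 tagged on the trunks, matching the requirement that it not be native.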
grv334.docx - ARIV4
(Simulink model file, MATLAB R2017a: exported model metadata and block-diagram settings only; no readable prose.)
Talk given by Toronto Garcez, aka torontux, during the 3rd edition of the Nullbyte Security Conference on November 26, 2016.
Abstract:
The goal of the presentation is to demonstrate, in a practical, step-by-step way, how to build a botnet out of wi-fi routers and/or embedded devices in general. It demonstrates the development of a command-and-control server and the use of backdoored firmwares to turn devices into bots.
Debugging the Cloud Foundry Routing Tier - VMware Tanzu
The document describes an issue where Gorouters in Cloud Foundry are experiencing high memory usage, too many open files, and a growing number of connections from HAProxy load balancers to the Gorouters but not from the Gorouters to application backends. This suggests a problem with unclosed connections on the Gorouters. Various troubleshooting steps are described, such as checking Gorouter logs and metrics, restarting Gorouters, and ruling out misbehaving route services. However, the root cause is not definitively identified.
- The document discusses various Linux system log files such as /var/log/messages, /var/log/secure, and /var/log/cron, and provides examples of log entries.
- It also covers log-management tools like logrotate and logwatch that are used to rotate and summarize log files.
- Networking topics like IP addressing, subnet masking, routing, ARP, and packet sniffing with tcpdump are explained along with examples.
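As a sketch of the log-rotation side, a minimal logrotate drop-in might look like this (the path and retention values are illustrative):

```
# /etc/logrotate.d/myapp: rotate the log weekly and keep four compressed copies
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```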
The document provides a lab workbook for CCNP Enterprise certification topics. It includes configurations and verification tasks for various labs covering VLANs, trunking, VTP, STP, RSTP, MSTP, DTP, etherchannel, HSRP, OSPF and more. The initial lab covers creating VLANs 99, 100, 110, 120 and 999 on switches, setting up a management interface on VLAN 99, and enabling Telnet and SSH access for the admin user.
The document contains instructions for configuring CCNP Enterprise lab switches. It includes steps to:
1. Define hostnames and create VLANs for management, servers, guest, office and parking on all switches.
2. Configure a management interface on VLAN 99 for each switch.
3. Enable Telnet and SSH access for the "admin" user to allow remote connections to the switches.
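The three steps might look roughly like this in IOS on one switch (the IP addressing, credentials and VLAN name are illustrative):

```
hostname SCOTSW01
!
vlan 99
 name MANAGEMENT
!
! Management SVI on VLAN 99
interface Vlan99
 ip address 10.99.0.1 255.255.255.0
 no shutdown
!
! Local admin user plus SSH prerequisites
username admin privilege 15 secret cisco123
ip domain-name lab.local
crypto key generate rsa modulus 2048
!
line vty 0 4
 login local
 transport input telnet ssh
```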
Presented at LISA18: https://www.usenix.org/conference/lisa18/presentation/babrou
This is a technical dive into how we used eBPF to solve real-world issues uncovered during an innocent OS upgrade. We'll see how we debugged a 10x CPU increase in Kafka after a Debian upgrade and what lessons we learned. We'll go from high-level effects like increased CPU, to flamegraphs showing us where the problem lies, to tracing timers and function calls in the Linux kernel.
The focus is on tools that operations engineers can use to debug performance issues in production. This particular issue happened at Cloudflare on a Kafka cluster doing 100 Gbps of ingress and many multiples of that in egress.
This document describes how to configure an OpenStack environment with Distributed Virtual Router (DVR) functionality using VirtualBox virtual machines. It includes details on setting up 3 VMs for the controller, network, and compute nodes, installing OpenStack using scripts, configuring IP addresses and users, replicating the compute node, and verifying the DVR installation and environment.
This document describes setting up and testing ProxySQL for query routing and high availability with Percona XtraDB Cluster (PXC). It includes instructions for installing and configuring ProxySQL, adding backend PXC servers, creating query rules for routing, and testing read/write splitting and failover through sysbench tests. Failover is demonstrated by stopping one PXC node, and ProxySQL is shown routing queries to the remaining nodes and marking the failed node as offline in its status.
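The query-routing part of such a setup can be sketched on ProxySQL's admin interface like this (the hostgroup numbers are illustrative; ProxySQL applies rules only after they are loaded to runtime):

```sql
-- Read/write split: SELECT ... FOR UPDATE goes to the writer hostgroup (10),
-- plain SELECTs go to the reader hostgroup (20)
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (2, 1, '^SELECT', 20, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```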
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions - Victor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
KCD Costa Rica 2024 - Nephio para parvulitos - Victor Morales
Nephio is an open source project donated by Google and recently added to the Linux Foundation Networking. Its main objective is to facilitate the deployment and management of Network Applications (such as 5G) on a large scale. This project allows Telecommunications companies to use practices such as GitOps and Cloud-Native in the control of their applications and that have been widely adopted by the industry.
CCOSS + KCD Mexico 2024 - Embracing GitOps in Telecom with Nephio - Victor Morales
Nephio is an open source project donated by Google and recently included as part of the Linux Foundation Networking projects umbrella. Its main objective is to facilitate the deployment and management of Network Applications (such as 5G) on large scale. This project allows Telecom companies to use well-known practices such as GitOps and Cloud-Native to onboard their applications.
Nephio is an open source project that allows companies to manage their networking applications at scale. This year, the community has worked hard to ship its first release, which offers a new alternative to be considered.
Tips and tricks for contributing to an Open Source project.pptx - Victor Morales
Contributing to any open source project can be overwhelming at the beginning, given the unwritten rules around them. But there are some tricks that can help you through the process. This session provides some etiquette rules that I've learned on my open source journey as a contributor and reviewer on several projects. The main takeaway is a set of best practices for the onboarding process in any open source project.
Understanding the Cloud-Native origins.pptx - Victor Morales
Cloud-Native technologies are the result of many technologies and efforts to deliver solutions efficiently. Virtualization technologies, intent-driven architectures and self-service models are just a few of the events that have revolutionized the industry. These seem like isolated events, but by analyzing them more closely we may be able to predict some of the future, a future that can improve our careers or our businesses. Through this session, I'll share some experiences collected over my last 10 years working with cloud technologies. I'll cover the usage of open source technologies and explain why other industries, like Telecommunications, have invested in them aggressively.
This presentation was used in "La Hora de Kubernetes" to share experiences acquired during my journey in the OPNFV community, as well as trends and challenges faced by the Telcos.
The document discusses Kubernetes networking and container networking interfaces (CNI). It provides an overview of the Kubelet and container runtime workflows for setting up pod networking using CNI plugins. Specific details are given on networking setup in ContainerD and CRI-O. A CNI plugin written in BASH is demonstrated. Container networking uses bridges, veth pairs, and CNI plugins to connect containers to networks. Performance implications of double tunneling with Kubernetes on OpenStack are also noted.
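The BASH CNI plugin mentioned above boils down to a dispatcher on the CNI_COMMAND environment variable that the container runtime sets when invoking the binary. The sketch below only echoes canned JSON results (the version strings and address are illustrative); a real plugin would create a veth pair, move one end into the pod's network namespace and assign the address:

```shell
#!/bin/sh
# Minimal sketch of a CNI-style plugin: the runtime invokes the binary with
# CNI_COMMAND set and the network config on stdin, and expects JSON on stdout.
handle_cni() {
  case "$CNI_COMMAND" in
    VERSION) printf '{"cniVersion":"0.4.0","supportedVersions":["0.3.1","0.4.0"]}\n' ;;
    ADD)     printf '{"cniVersion":"0.4.0","ips":[{"address":"10.233.66.10/24"}]}\n' ;;
    DEL)     : ;;                       # interface teardown would happen here
    *)       echo "unknown CNI_COMMAND: $CNI_COMMAND" >&2; return 1 ;;
  esac
}

# Demonstration: ask the "plugin" which CNI versions it supports
CNI_COMMAND=VERSION handle_cni
```

The same dispatch shape is what ContainerD and CRI-O drive through their CNI integration when setting up a pod sandbox.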
Removing Language Barriers for Spanish-speaking Professionals - Victor Morales
In 2020 the Apache Software Foundation community published a survey[1] which suggests that language can be one of the major barriers to contributing to any open source project. According to some estimates[2], open source technologies in Latin America will grow fivefold in the coming years. Talented professionals, students and enthusiasts demand access to documentation written in their own language. That's why the Spanish documentation team has been participating in different initiatives to help others contribute to the translation process. This session shares what the Kubernetes Spanish documentation team has accomplished and walks through the process to translate and contribute to the CNCF documentation. The prime audience for this session is Spanish-speaking professionals and enthusiasts willing to participate in improving the CNCF documentation. They will understand the workflow to submit documentation changes and how to participate in the localization process. [1] https://cwiki.apache.org/confluence/download/attachments/158865837/The%202020%20ASF%20Community%20Survey%20-%20Readout%20%281%29.pdf?api=v2 [2] http://www.latinamerica.tech/2019/11/12/latins-contribute-little-to-open-source-software/
How to contribute to an open source project and don’t die during the Code Rev... - Victor Morales
Reviewing changes is an essential part of software development. This process involves the collaboration of several team members who ensure that quality standards are kept. In open source projects, the process can be overwhelming for newbies. In this presentation, I will share experiences and best practices acquired over my years contributing to different open source projects, like OpenStack, Kubernetes, CNCF and OPNFV, and how to improve the collaboration between contributors and reviewers.
This document discusses mutating admission webhooks in Kubernetes. It provides context on PNFs, VNFs, and CNFs. It then describes how a mutating admission webhook can be implemented to inject a generic NSE sidecar into pods. It outlines the prerequisites and provides links to example implementations of generating certificates, deploying webhook resources, and creating a MutatingWebhookConfiguration to deploy the webhook.
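The registration side of such a webhook can be sketched with a manifest like the following (the names, namespace, path and caBundle placeholder are illustrative); it tells the API server to send pod CREATE events to an in-cluster service before admission:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: nse-sidecar-injector
webhooks:
  - name: nse-sidecar-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: sidecar-injector
        namespace: default
        path: /mutate
      caBundle: <base64-encoded CA certificate>
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
```

The webhook service then answers each AdmissionReview with a JSON patch that injects the sidecar container into the pod spec.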
This document discusses Cloud Native Network Functions (CNFs) and provides an overview of the GW Tester project, which aims to provide an example CNF implementation. It includes links to resources that define CNFs and cloud native principles. The GW Tester project code is open source and focuses on portability, realism, and usefulness through the use of annotations, sidecars, and Helm charts to deploy CNFs on immutable infrastructure managed by an orchestrator using microservices.
Pod Sandbox workflow creation from Dockershim - Victor Morales
These slides were used to explain the K8s pod sandbox creation process used by Dockershim during the Cloud-Native MX meetup. The presentation clarifies what the Dockershim deprecation means and what the "pause" containers are.
These slides were used during a technical session for the Cloud-Native El Salvador community. It covers the basic Kubernetes components, some installers and the main Kubernetes resources. The demo used the capabilities provided by the Horizontal Pod Autoscaler.
Cloud-oriented development is a reality. Many companies have replaced their tools and modified their operations to obtain the benefits offered by this new paradigm. This session covers topics related to the emergence of these technologies, notably the different service and deployment models, adoption strategies, and the use of existing tools such as Kubernetes.
Building cloud native network functions - outcomes from the gw-tester nsm imp... - Victor Morales
The GW-Tester project is a set of tools created for testing GPRS tunneling protocols. During the last virtual event, the journey to transform the GW-Tester to a Cloud-Native architecture was presented. In that session, we discussed some considerations, from the containers' design to the CNI multiplexer implementation details. This session covers lessons learned during the Network Service Mesh (NSM) implementation. NSM offers a different approach compared to Multus and DANM to manage multiple network interfaces, and this may result in architectural changes to the CNF. The audience will get familiar with some considerations to take into account when consuming the NSM SDK. People from the ONAP, OPNFV and CNTT communities might find this information relevant to their projects.
Kubernetes uses Requests and Limits values to determine where and how to execute pods. This presentation covers these concepts as well as Quality of Service classes. It also points to a demo that uses Virtlet to share CPU workload.
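The interplay of the two values can be sketched with a pod spec like the following (names, image and sizes are illustrative); because requests and limits differ, Kubernetes places this pod in the Burstable QoS class rather than Guaranteed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:          # used by the scheduler to pick a node
          cpu: "500m"
          memory: "128Mi"
        limits:            # enforced at runtime (CPU throttling, OOM kill)
          cpu: "1"
          memory: "256Mi"
```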
Casablanca has contributed to ONAP including developing test services and plugins for multi-cloud and Kubernetes environments. Some key contributions include:
1. A MultiCloud/K8S plugin written in Go that offers an API for interacting with cloud regions supporting Kubernetes.
2. A Kubernetes Reference Deployment (KRD) that provides a reference for deploying Kubernetes clusters satisfying ONAP requirements through Ansible playbooks.
3. Work on OVN4NFVK8S and a virtual firewall use case composed of packet generator, firewall, and traffic sink virtual functions to report traffic volumes to ONAP.
Kubernetes based Cloud-region support in ONAP to bring up VM and container ba... - Victor Morales
This material was used during the ONAP DDF + OPNFV Plugfest 2019 in Paris to share the progress made on this project and the plans for upcoming releases.
Response & Safe AI at Summer School of AI at IIITH - IIIT Hyderabad
Talk covering guardrails, jailbreaks, the alignment problem, RLHF, the EU AI Act, machine and graph unlearning, bias, inconsistency, probing, and interpretability.
Social media management system project report.pdf - Kamal Acharya
The project "Social Media Platform in Object-Oriented Modeling" aims to design and model a robust and scalable social media platform using object-oriented modeling principles. In the age of digital communication, social media platforms have become indispensable for connecting people, sharing content, and fostering online communities. However, their complex nature requires meticulous planning and organization. This project addresses the challenge of creating a feature-rich and user-friendly social media platform by applying key object-oriented modeling concepts. It entails the identification and definition of essential objects such as "User," "Post," "Comment," and "Notification," each encapsulating specific attributes and behaviors. Relationships between these objects, such as friendships, content interactions, and notifications, are meticulously established. The project emphasizes encapsulation to maintain data integrity, inheritance for shared behaviors among objects, and polymorphism for flexible content handling. Use case diagrams depict user interactions, while sequence diagrams showcase the flow of interactions during critical scenarios. Class diagrams provide an overarching view of the system's architecture, including classes, attributes, and methods. By undertaking this project, we aim to create a modular, maintainable, and user-centric social media platform that adheres to best practices in object-oriented modeling. Such a platform will offer users a seamless and secure online social experience while facilitating future enhancements and adaptability to changing user needs.
An Internet Protocol address (IP address) is a logical numeric address assigned to every device (computer, printer, switch, router, tablet, smartphone, or any other device) that is part of a TCP/IP-based network.
Types of IP address:
Dynamic means "constantly changing". Dynamic IP addresses aren't more powerful, but they can change over time.
Static means staying the same. Static IP addresses don't change.
Most IP addresses assigned today by Internet Service Providers are dynamic, because that is more cost effective for the ISP and for you.
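The difference can be sketched with Debian-style /etc/network/interfaces stanzas (the interface name and addresses are illustrative):

```
# Dynamic: let DHCP assign whatever address the ISP or router hands out
auto eth0
iface eth0 inet dhcp

# Static alternative: pin the address so it never changes
# iface eth0 inet static
#     address 192.168.1.50
#     netmask 255.255.255.0
#     gateway 192.168.1.1
```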
Unblocking The Main Thread - Solving ANRs and Frozen Frames - Sinan KOZAK
In the realm of Android development, the main thread is our stage, but too often it becomes a battleground where performance issues arise, leading to ANRs, frozen frames, and sluggish UIs. As we strive for excellence in user experience, understanding and optimizing the main thread becomes essential to prevent these common performance bottlenecks. We have strategies and best practices for keeping the main thread uncluttered. We'll examine the root causes of performance issues and techniques for monitoring and improving main thread health as well as app performance. In this talk, participants will walk away with practical knowledge on enhancing app performance by mastering the main thread. We'll share proven approaches to eliminate real-life ANRs and frozen frames to build apps that deliver a butter-smooth experience.
2. Victor Morales
• 15+ yrs as a Software Engineer
• .NET, Java, Python, Go programmer
• OpenStack, OPNFV, ONAP and CNCF contributor
https://about.me/electrocucaracha
3. Multicore Crisis
Named by Bob “SmoothSpan” Warfield in 2007: the situation in which the effect of Moore’s Law has changed. The doubling of transistors per chip continues, but the by-product is no longer faster processor speeds: it is more cores per chip.
https://smoothspan.com/2007/09/06/a-picture-of-the-multicore-crisis/
5. Fallacies of distributed computing
1. The network is reliable
2. Latency is zero
3. Bandwidth is infinite
4. The network is secure
5. Topology doesn't change
6. There is one administrator
7. Transport cost is zero
8. The network is homogeneous
https://blogs.oracle.com/developers/fallacies-of-distributed-systems
7. The Kubernetes network model
• pods on a node can communicate with all pods on all nodes without NAT
• agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
• pods in the host network of a node can communicate with all pods on all nodes without NAT
https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model
https://mrscriptkiddie.com/what-is-network-address-translationnat-working-explained/
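The "no NAT between pods" property can be checked directly from the API. A minimal sketch, assuming a configured kubectl and the two pods (pod1, pod2) used in the demos later in this deck; it reports "skipped" when no cluster is reachable:

```shell
# Verify pod-to-pod reachability without NAT, assuming pods pod1/pod2 exist.
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl get nodes >/dev/null 2>&1; then
    CHECK=skipped                      # no kubectl or no cluster access here
else
    # grab pod2's cluster-internal IP
    POD2_IP=$(kubectl get pod pod2 -o jsonpath='{.status.podIP}')
    # ping pod2 by its pod IP from inside pod1; the echo request arrives
    # carrying pod1's own pod IP as source, i.e. no NAT on the path
    if kubectl exec pod1 -- ping -c 1 "$POD2_IP" >/dev/null; then
        CHECK=ok
    else
        CHECK=failed
    fi
fi
echo "$CHECK"
```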
14. ssh worker01 (PIDs/net ns)
[Diagram: worker01 (eth0) running pod1 (10.233.66.4/24: client + pause containers) and pod2 (10.233.66.5/24: server + pause containers), each with its own eth0]
$ for name in $(sudo docker ps --filter "name=pod*" --format "{{.Names}}"); do
> echo "NAME:$name $(sudo docker inspect $name --format 'PID:{{.State.Pid}} CMD:{{.Path}}')"
> done
NAME:k8s_server_pod2_default_7e0da6ce-2693-448a-9def-55e25a53a9f8_0 PID:23231 CMD:sleep
NAME:k8s_client_pod1_default_7f3063da-2e86-466e-812b-b45b8791af60_0 PID:23173 CMD:sleep
NAME:k8s_POD_pod2_default_7e0da6ce-2693-448a-9def-55e25a53a9f8_0 PID:22936 CMD:/pause
NAME:k8s_POD_pod1_default_7f3063da-2e86-466e-812b-b45b8791af60_0 PID:22902 CMD:/pause
$ sudo lsns --type net
NS TYPE NPROCS PID USER COMMAND
4026531993 net 139 1 root /sbin/init
4026532219 net 1 892 root /usr/sbin/haveged --Foreground --verbose=1 -w 1024
4026532321 net 2 17306 root /pause
4026532401 net 2 17675 root /pause
4026532481 net 2 22902 root /pause
4026532549 net 2 22936 root /pause
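The NS column in the lsns output is a namespace inode number, and any process's network namespace can be read straight from /proc. That is how a container PID from docker inspect (e.g. 23173 above) is matched to a row of lsns; a quick sketch using the current shell's own PID:

```shell
# Each net namespace is an inode; /proc/<pid>/ns/net links to it.
NETNS=$(readlink /proc/$$/ns/net)   # e.g. net:[4026531993]
echo "$NETNS"
```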
[Diagram: worker01's root network namespace 4026531993 (eth0, 10.0.2.254/24) beside pod1's namespace 4026532481 (eth0, 10.233.66.4/24, PIDs 23173 and 22902) and pod2's namespace 4026532549 (eth0, 10.233.66.5/24, PIDs 23231 and 22936)]
15. ssh worker01 (veths)
$ sudo nsenter -t 23173 -n ip a s eth0
3: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 22:3d:13:16:4d:91 brd ff:ff:ff:ff:ff:ff
inet 10.233.66.4/24 brd 10.233.66.255 scope global eth0
valid_lft forever preferred_lft forever
$ sudo nsenter -t 23231 -n ip a s eth0
3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 4e:45:9e:07:aa:73 brd ff:ff:ff:ff:ff:ff
inet 10.233.66.5/24 brd 10.233.66.255 scope global eth0
valid_lft forever preferred_lft forever
$ ip add show vethd2ff1e8f
10: vethd2ff1e8f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
link/ether de:71:37:62:8c:81 brd ff:ff:ff:ff:ff:ff link-netnsid 2
$ ip add show vethedf4daba
11: vethedf4daba@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
link/ether 6e:38:5d:c3:1c:fe brd ff:ff:ff:ff:ff:ff link-netnsid 3
[Diagram: pod1's namespace 4026532481 (eth0@if10, 10.233.66.4/24) and pod2's namespace 4026532549 (eth0@if11, 10.233.66.5/24) connected to the root namespace 4026531993 (eth0, 10.0.2.254/24) via vethd2ff1e8f (de:71:37:62:8c:81) and vethedf4daba (6e:38:5d:c3:1c:fe)]
16. ssh worker01 (bridge)
[Diagram: same layout, with both veths now enslaved to the cni0 bridge (10.233.66.1/24) in the root namespace]
$ ip addr show cni0
7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether 2e:c5:8a:c4:ad:59 brd ff:ff:ff:ff:ff:ff
inet 10.233.66.1/24 brd 10.233.66.255 scope global cni0
valid_lft forever preferred_lft forever
$ brctl show cni0
bridge name bridge id STP enabled interfaces
cni0 8000.1af5b56bb7bd no veth08082f56
vethd2ff1e8f
vethe6c8889e
vethedf4daba
$ brctl showmacs cni0
port no mac addr is local? ageing timer
1 12:ec:de:82:22:e9 yes 0.00
2 42:ed:db:49:ce:24 yes 0.00
4 6e:38:5d:c3:1c:fe yes 0.00
2 9e:93:b1:9b:10:1f no 1.16
3 de:71:37:62:8c:81 yes 0.00
1 ea:56:ef:33:2a:9a no 4.16
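The cni0/veth plumbing shown above can be reproduced by hand. A minimal sketch of what the bridge-style CNI plugin does: every name (ns-demo, br-demo, veth-host, veth-pod) and the 10.99.0.0/24 range are invented for illustration, and the script reports "skipped" when not run as root:

```shell
# By-hand version of the pod wiring: a veth pair, one end in a fresh
# network namespace, the other enslaved to a bridge. Requires root.
if [ "$(id -u)" -ne 0 ]; then
    RESULT=skipped
else
    ip netns add ns-demo                                # stand-in for a pod's net ns
    ip link add br-demo type bridge                     # stand-in for cni0
    ip addr add 10.99.0.1/24 dev br-demo
    ip link set br-demo up
    ip link add veth-host type veth peer name veth-pod  # the veth pair
    ip link set veth-pod netns ns-demo                  # pod end into the namespace
    ip link set veth-host master br-demo up             # host end onto the bridge
    ip netns exec ns-demo ip addr add 10.99.0.2/24 dev veth-pod
    ip netns exec ns-demo ip link set veth-pod up
    ip netns exec ns-demo ip route add default via 10.99.0.1  # the pod's default route
    if ip netns exec ns-demo ping -c 1 10.99.0.1 >/dev/null; then
        RESULT=ok                                       # namespace reaches the bridge
    else
        RESULT=failed
    fi
    ip netns del ns-demo && ip link del br-demo         # cleanup
fi
echo "$RESULT"
```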
17. ssh worker01 (routes)
[Diagram: same worker01 layout: pod namespaces 4026532481 (eth0@if10, 10.233.66.4/24) and 4026532549 (eth0@if11, 10.233.66.5/24) attached through vethd2ff1e8f and vethedf4daba to cni0, root namespace 4026531993 with eth0 10.0.2.254/24]
$ sudo nsenter -t 23173 -n ping -c 1 10.233.66.5
PING 10.233.66.5 (10.233.66.5) 56(84) bytes of data.
64 bytes from 10.233.66.5: icmp_seq=1 ttl=64 time=0.154 ms
--- 10.233.66.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
$ sudo nsenter -t 23173 -n ip route
default via 10.233.66.1 dev eth0
10.233.64.0/18 via 10.233.66.1 dev eth0
10.233.66.0/24 dev eth0 proto kernel scope link src 10.233.66.4
$ ip route
default via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.254 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.254
10.0.2.1 dev eth0 proto dhcp scope link src 10.0.2.254 metric 100
10.10.16.0/24 dev eth1 proto kernel scope link src 10.10.16.4
10.233.64.0/24 via 10.0.2.56 dev eth0
10.233.65.0/24 via 10.0.2.14 dev eth0
10.233.66.0/24 dev cni0 proto kernel scope link src 10.233.66.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
19. Demo 2
[Diagram: pod1 (client, 10.233.66.6/24) on worker01 and pod2 (server, 10.233.65.5/24) on worker02]
$ kubectl get pods -o custom-columns="NAME:metadata.name,IP:status.podIP,NODE:spec.nodeName"
NAME IP NODE
pod1 10.233.66.6 worker01
pod2 10.233.65.5 worker02
20. ssh worker01 (routes)
[Diagram: worker01's root namespace 4026531993 (eth0 10.0.2.254/24, cni0, veth24e469fc) and pod1's namespace 4026532481 (eth0@if12, 10.233.66.6/24, PIDs 24330 and 24200)]
$ sudo nsenter -t 24330 -n traceroute 10.233.65.5
traceroute to 10.233.65.5 (10.233.65.5), 30 hops max, 60 byte packets
1 10.233.66.1 (10.233.66.1) 2.161 ms 1.855 ms 1.775 ms
2 10.0.2.14 (10.0.2.14) 1.719 ms 1.603 ms 1.515 ms
3 10.233.65.5 (10.233.65.5) 1.447 ms 1.336 ms 3.424 ms
$ sudo nsenter -t 24330 -n ip route
default via 10.233.66.1 dev eth0
10.233.64.0/18 via 10.233.66.1 dev eth0
10.233.66.0/24 dev eth0 proto kernel scope link src 10.233.66.6
$ ip route
default via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.254 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.254
10.0.2.1 dev eth0 proto dhcp scope link src 10.0.2.254 metric 100
10.10.16.0/24 dev eth1 proto kernel scope link src 10.10.16.4
10.233.64.0/24 via 10.0.2.56 dev eth0
10.233.65.0/24 via 10.0.2.14 dev eth0
10.233.66.0/24 dev cni0 proto kernel scope link src 10.233.66.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
[Diagram: worker02's root namespace 4026531993 (eth0 10.0.2.14/24, cni0 10.233.65.1/24, vethf6de8b6e) and pod2's namespace 4026532554 (eth0@if11, 10.233.65.5/24, PIDs 22403 and 22281); both nodes sit on 10.0.2.0/24, and worker01's cni0 is 10.233.66.1/24]
23. ssh worker01 (routes)
$ ip route
default via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.254 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.254
10.0.2.1 dev eth0 proto dhcp scope link src 10.0.2.254 metric 100
10.10.16.0/24 dev eth1 proto kernel scope link src 10.10.16.4
10.233.64.0/24 via 10.233.64.0 dev flannel.1 onlink
10.233.65.0/24 via 10.233.65.0 dev flannel.1 onlink
10.233.66.0/24 dev cni0 proto kernel scope link src 10.233.66.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
$ ip neigh show dev flannel.1
10.233.64.0 lladdr fe:f7:38:5b:0a:4d PERMANENT
10.233.65.0 lladdr a2:c1:bf:3d:c9:7b PERMANENT
$ bridge fdb show dev flannel.1
fe:f7:38:5b:0a:4d dst 10.0.2.56 self permanent
a2:c1:bf:3d:c9:7b dst 10.0.2.14 self permanent
$ ip -d link show flannel.1
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether f6:22:6b:4b:11:48 brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 1 local 10.0.2.207 dev eth0 srcport 0 0 dstport 8472 nolearning ttl inherit ageing 300 udpcsum noudp6zerocsumtx
noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
24. Virtual eXtensible Local Area Networking (VXLAN)
The VXLAN protocol is a tunnelling protocol designed to overcome the 4096-ID limit of IEEE 802.1Q VLANs: it expands the identifier to 24 bits (16,777,216 IDs).
https://www.kernel.org/doc/html/latest/networking/vxlan.html
https://www.beyondcli.com/101/vxlan-vsphere-vcns-vs-nsx-for-vsphere/
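The flannel.1 settings shown on the routes slide (VNI 1, UDP port 8472, nolearning, permanent FDB entries per remote VTEP) can be recreated by hand. A sketch in a throwaway namespace; the names vxlan-demo and vx0 are invented, the MAC and VTEP address are copied from the bridge fdb output above, and the script reports "skipped" without root:

```shell
# Hand-build a VXLAN device with roughly flannel.1's settings.
if [ "$(id -u)" -ne 0 ]; then
    FDB=skipped
else
    ip netns add vxlan-demo
    # VNI 1, flannel's default UDP port 8472, learning disabled
    ip netns exec vxlan-demo ip link add vx0 type vxlan id 1 dstport 8472 nolearning
    # with nolearning, forwarding entries are programmed explicitly:
    # "the MAC of worker02's flannel.1 is reachable via VTEP 10.0.2.14"
    ip netns exec vxlan-demo bridge fdb add a2:c1:bf:3d:c9:7b dev vx0 dst 10.0.2.14
    FDB=$(ip netns exec vxlan-demo bridge fdb show dev vx0)
    ip netns del vxlan-demo                              # cleanup
fi
echo "$FDB"
```

On a real cluster flanneld keeps those FDB (and neighbor) entries in sync as nodes join and leave, which is what the PERMANENT entries in the ip neigh output reflect.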
26. Slow Responses
https://pragprog.com/titles/mnee2/release-it-second-edition/
Letting threads block for minutes before an exception is thrown is an antipattern: a blocked thread can't process other transactions, so overall capacity is reduced.
• Slow responses trigger Cascading Failures
• For websites, slow responses cause more traffic
• Consider Fail Fast
• Hunt for memory leaks or resource contention
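Fail Fast in its simplest form means putting a hard bound on every remote call instead of letting a thread hang for minutes. A sketch at the shell level; the endpoint is a deliberately unreachable test address and the limits are illustrative:

```shell
# Bound the connect and the whole request; surface failure immediately.
if curl --silent --connect-timeout 1 --max-time 2 http://10.255.255.1/health; then
    STATUS="dependency healthy"
else
    STATUS="dependency slow or down, failing fast"   # error surfaced within ~2s, not minutes
fi
echo "$STATUS"
```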