5/13/13 presentation to Austin DevOps Meetup Group, describing our system for deploying 15 websites and supporting services in multiple languages to bare Red Hat 6 VMs. All system-wide software is installed using RPMs, and all application software is installed using Git or tarballs.
The document discusses building a lightweight Docker container for Perl by starting with a minimal base image like BusyBox, copying just the Perl installation and necessary shared libraries into the container, and setting Perl as the default command to avoid including unnecessary dependencies and tools from a full Linux distribution. It provides examples of Dockerfiles to build optimized Perl containers from Gentoo and by directly importing a tarball for minimal size and easy distribution.
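The tarball-import recipe described above can be sketched as a tiny Dockerfile generator. This is a hypothetical illustration, not from the talk itself: the tarball name and paths are placeholders, and a real build would `docker build` the emitted file.

```python
# Sketch: emit the kind of minimal Dockerfile the talk describes.
# "perl-runtime.tar.gz" is a hypothetical prebuilt tarball containing
# the Perl installation plus the shared libraries it needs.
def minimal_perl_dockerfile(tarball="perl-runtime.tar.gz"):
    """Build a Dockerfile that imports a prebuilt Perl tarball into an
    empty image, so the container holds only Perl and its dependencies."""
    lines = [
        "FROM scratch",              # start from an empty base image
        f"ADD {tarball} /",          # unpack perl and its .so files
        'CMD ["/usr/bin/perl"]',     # make perl the default command
    ]
    return "\n".join(lines)

print(minimal_perl_dockerfile())
```

The same pattern works with `FROM busybox` when a shell and basic tools are wanted in the image.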
Fun with containers: Use Ansible to build Docker images - abadger1999
Docker allows deploying applications in isolated containers. Ansible is useful for building Docker images because it provides consistency and portability for configuring containers in the same way as configuring hosts. Ansible roles from Galaxy can be used to try applications before deploying them by building Docker images configured with Ansible plays that include the roles.
Fabric is a Python library and command-line tool that allows users to automate and streamline SSH administration tasks like application deployment or systems administration. It provides functions for executing remote shell commands, uploading/downloading files, and other basic SSH operations. Fabric can be used from Python scripts or via the command line.
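As a rough illustration of the Fabric style, the sketch below composes a deployment "task" out of shell steps. It uses `subprocess` locally as a stand-in for Fabric's SSH-backed `run()`, and the command is a placeholder; it is not the Fabric API itself.

```python
import subprocess

def run(command):
    """Stand-in for Fabric's run()/local(): execute a shell command and
    return its captured output (Fabric would execute this over SSH)."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def deploy():
    """A 'task' in the Fabric style: a plain function composed of
    shell steps, runnable against one host or many."""
    return run("echo pulled latest code")   # placeholder for a real step

print(deploy())
```

In real Fabric the same shape appears as functions in a `fabfile.py`, invoked from the command line with `fab deploy`.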
The document outlines a 90-minute introduction to Ansible using Docker. It discusses setting up the environment with Docker, using ad-hoc commands and playbooks to automate tasks like installing Apache and configuring variables. Exercises demonstrate inventory management, templating configurations with Jinja2, and other core Ansible concepts. The document provides an overview but does not cover more advanced topics like dynamic inventory, roles, writing custom modules, or Ansible Tower.
Getting instantly up and running with Docker and Symfony - André Rømcke
A look into how you can start using Docker today with a ready-made setup including PHP 7, nginx, Redis, Blackfire and so on; how you may extend it and integrate it into your continuous integration workflow; and how you can set up a continuous deployment workflow using, for instance, Travis CI.
Quicklink: https://legacy.joind.in/19070
Controlling multiple VMs with the power of Python - Yurii Vasylenko
This document discusses using Python and related libraries to remotely control and manage multiple virtual machines. It recommends using PySphere and related tools to connect to VMWare servers, power on/off VMs, create snapshots, execute commands and programs on VMs, and more. The document provides instructions for setting up the necessary Python environment and configuring connections to VMWare servers. It also includes code examples for performing basic tasks like connecting to a VM, managing snapshots, running long-running tasks asynchronously, and executing programs on VMs with graphical interfaces.
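The long-running-task pattern mentioned above can be approximated with a thread pool. This is a toy stand-in (the VM names are hypothetical and the "work" is a placeholder), not the PySphere API; it only shows the fan-out structure.

```python
from concurrent.futures import ThreadPoolExecutor

def long_task(vm_name):
    """Placeholder for a long-running operation on one VM, e.g. taking
    a snapshot or executing a program (PySphere would do the real work)."""
    return f"{vm_name}: done"

def run_on_all(vm_names):
    """Fan the task out to several VMs concurrently instead of waiting
    on each one in turn; results come back in input order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(long_task, vm_names))

print(run_on_all(["web-01", "db-01"]))
```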
Image-based deployment paradigm: Immutable infrastructure - Daegwon Kim
- The document discusses immutable infrastructure and immutable images in cloud computing.
- Immutable infrastructure uses configuration management tools like Chef and Docker to build stateless, reproducible server images.
- When servers are deployed from these images, they are configured automatically and can be replaced easily without losing state.
This document provides an overview of IT automation using Ansible. It discusses using Ansible to automate tasks across multiple servers like installing packages and copying files without needing to login to each server individually. It also covers Ansible concepts like playbooks, variables, modules, and vault for securely storing passwords. Playbooks allow defining automation jobs as code that can be run on multiple servers simultaneously in a consistent and repeatable way.
Using Capifony for Symfony apps deployment (updated) - Žilvinas Kuusas
My presentation from the talk about Symfony apps deployment I gave at Kaunas PHP meetup.
Capistrano is an open source tool for running scripts on multiple servers. Capifony is a set of instructions called "recipes" for deploying Symfony applications.
Built to make your job a lot easier.
aptly is a swiss army knife for Debian repository management: it allows you to mirror remote repositories, take snapshots, pull new versions of packages along with their dependencies, and publish snapshots.
http://www.aptly.info/
This document discusses repetitive system administration tasks and proposes Ansible as a solution. It describes how Ansible works using agentless SSH to automate tasks like software installation, configuration, and maintenance across multiple servers. Key aspects covered include Ansible's inventory, modules, playbooks, templates, variables, roles and Docker integration. Ansible Tower is also introduced as a GUI tool for running Ansible jobs. The document recommends Ansible for anyone doing the same tasks across multiple servers to gain efficiencies over manual processes.
A revamped version of the Ansible intro talk from February 2015, brought up-to-date for the January Ansible meetup in Berlin.
Join our group: https://www.meetup.com/Ansible-Berlin
Title: Ansible, best practices.
Ansible has taken a prominent place in the configuration management world. By now many people involved in DevOps have taken a look at it, or done a first project with it. Now it is time to step back and look at quality and craftsmanship. Bas Meijer, Ansible ambassador, will talk about Ansible best practices, and will show tips, tricks and examples based on several projects.
About the speaker
Bas is a systems engineer and software developer who has wasted decades on late-night hacking. He is currently helping two enterprises with continuous delivery and DevOps.
The last decade belonged to virtual machines and the next one belongs to containers. CoreOS is a new Linux distribution designed specifically for application containers and running them at scale. This talk will examine all the major components of CoreOS (etcd, fleet, docker, systemd) and how these components work together.
This document discusses using Docker and Java on a Raspberry Pi. It provides instructions for installing Docker on a Raspberry Pi and creating Dockerfiles to run a Tomcat application container from a Java WAR file. It also discusses using Docker for continuous delivery by building a Docker image registry to version and distribute application containers.
Deploying with Super Cow Powers (Hosting your own APT repository with reprepro) - Simon Boulet
This document discusses using reprepro to create and manage an APT repository for hosting custom packages and configurations. Reprepro allows syncing packages from external repositories, resigning packages with a custom key, and distributing packages to different environments like development, staging, and production. Configurations can be packaged and deployed per-environment to simplify management across suites. Integrating the custom repository with configuration management tools like Ansible promotes conformity.
Continuous Infrastructure: Modern Puppet for the Jenkins Project - PuppetConf... - Puppet
This document summarizes Tyler Croy's presentation on managing the Jenkins infrastructure using Puppet. It describes how the infrastructure evolved from an unmanaged setup at Sun/Oracle to using masterless Puppet and eventually Puppet Enterprise. Key aspects covered include managing services, hardware, code layout, testing, and deployment process. Special thanks are given to Puppet Labs for their support of the project.
Making an environment for infrastructure as code - Soshi Nemoto
The document provides instructions for setting up an environment for infrastructure as code using tools like Vagrant, Ansible, and Fabric. It details steps to install the necessary tools, create a Vagrant machine, edit configuration files to configure the Vagrant IP address and SSH keys, and then provides a test command to validate the Fabric deployment is working properly.
Deploying and maintaining your software with RPM/APT - Joshua Thijssen
The document describes a conference about deploying and maintaining software with RPM and APT package managers. The conference will take place on April 16-17, 2011 in Antwerp, Belgium and the URL http://joind.in/3315 provides additional information about the event.
To assess the significance of modern cloud technologies, the talk first covers the fundamentals of conventional cluster architectures, including concepts such as vertical and horizontal scaling, load balancing, storage types, and so on.
Lessons learned running large real-world Docker environments - Alois Mayr
This document summarizes lessons learned from running large Docker environments:
1. Dependencies between services can break architecture if not properly versioned.
2. A hardware defect in a single network card caused retransmissions under heavy load, affecting inter-container communication.
3. Logs from containers consumed all disk space when log management was not configured, preventing new containers from running.
4. Slowdowns occurred when an orchestration system stored excessive versions of services due to a configuration issue.
5. Massive load testing exposed dependencies between over 800 billion components, requiring automation to analyze problems at scale.
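Lesson 3 above is about capping log usage before it starves the host. Real deployments would configure the Docker logging driver for this; the sketch below is only a toy stand-in showing the idea of pruning the oldest files under an assumed size budget.

```python
import os

def prune_logs(log_dir, max_total_bytes):
    """Delete the oldest files in log_dir until the total size fits
    under the cap; a crude stand-in for proper log rotation."""
    files = [os.path.join(log_dir, f) for f in os.listdir(log_dir)]
    files.sort(key=os.path.getmtime)              # oldest first
    total = sum(os.path.getsize(f) for f in files)
    removed = []
    for path in files:
        if total <= max_total_bytes:
            break
        total -= os.path.getsize(path)
        os.remove(path)
        removed.append(os.path.basename(path))
    return removed
```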
- Tero Niemistö discusses using Docker in the Finnish public sector, where there are many regulatory requirements around data security, privacy, and infrastructure. This results in obstacles to building Docker solutions.
- He outlines some of the key requirements around auditing, server compliance, data residency, personnel clearances, and more. He then provides examples of processes for securely building, deploying, and managing containers within this regulated environment.
- Key aspects of their approach include building secure containers, automating compliance checks, duplicating systems across secure server rooms, and scanning for vulnerabilities at every stage of the development process.
Looking at how people with current deployments can start using Docker without having to replace anything. It also gives a migration path that allows testing the separate pieces and migrating over slowly without painting yourself into a corner, and covers why you might want to do this and the problems it may help to solve.
Solving Real World Production Problems with Docker - Marc Campbell
The document discusses securing the Docker delivery and execution process. It recommends choosing images carefully, scanning Dockerfiles, enabling Docker Content Trust to verify image integrity and publisher, and running the Docker Benchmark for Security to audit container configurations. The document outlines best practices for building, delivering, and executing Docker containers in a production environment.
Real-World Docker: 10 Things We've Learned - RightScale
Docker has taken the world of software by storm, offering the promise of a portable way to build and ship software - including software running in the cloud. The RightScale development team has been diving into Docker for several projects, and we'll share our lessons learned on using Docker for our cloud-based applications.
This document discusses Docker Inc. developer relations manager Patrick Chanezon's work programming the world with Docker. The key points discussed are:
- Patrick Chanezon works at Docker Inc. in developer relations and aims to program the world with Docker.
- Docker allows for platforms and networks to be programmed through containers and orchestration, enabling tools for mass innovation across industries.
- Docker 1.12 introduces built-in orchestration through Swarm mode and the Docker Service API, allowing for self-organizing and self-healing container orchestration without external dependencies.
Jenkins and Chef: Infrastructure CI and Automated Deployment - Dan Stine
This presentation discusses two key components of our deployment pipeline: Continuous integration of Chef code and automated deployment of Java applications. CI jobs for Chef code run static analysis and then provision, configure and test EC2 instances. Release jobs publish new cookbook versions to the Chef server. Deployment jobs identify target EC2 and VMware nodes and orchestrate Chef client runs. The flexibility of Jenkins is essential to our overall delivery architecture.
Adopt DevOps philosophy on your Symfony projects (Symfony Live 2011) - Fabrice Bernhard
This is the presentation given at the Symfony Live 2011 conference. It is an introduction to the new agile movement spreading in the technical operations community called DevOps and how to adopt it on web development projects, in particular Symfony projects.
Plan of the slides :
- Configuration Management
- Development VM
- Scripted deployment
- Continuous deployment
Tools presented in the slides:
- Puppet
- Vagrant
- Fabric
- Jenkins / Hudson
This document provides an overview and introduction to Node.js. It covers the basics of Node.js including setting up the environment, creating a first application, using the Node Package Manager (NPM), and an introduction to key concepts like asynchronous programming with callbacks and events. The course appears to be targeted at web developers and teaches additional frameworks that can be used with Node.js like Express.js, MongoDB, and Angular.js.
The document discusses using Fabric for deployment and system administration tasks across multiple servers. It provides examples of Fabric configuration, defining roles for servers, writing tasks to run commands on servers, and how to structure tasks for a full deployment workflow. Fabric allows running commands remotely via SSH and provides tools for task composition and failure handling.
Capistrano: deploy Magento project in an efficient way - Sylvain Rayé
Deploying a Magento project can be a very long and laborious task with some risk of errors. Having a good tool like Capistrano to prevent such pain helps you automate the process. With such a tool you can deploy a release of your Magento project in less than 5 minutes.
Puppet Primer - Robbie Jerrom, Solution Architect, VMware
Robbie Jerrom presents on Puppet, an open source configuration management tool. Puppet takes a declarative approach, allowing users to define what a system configuration should be rather than how to achieve it. Everything in Puppet is defined as a resource like packages, files, and services. Puppet operates in a client-server model and uses modules to define common configurations that can be applied across nodes. It aims to provide an easier way to deploy and manage software compared to traditional script-based approaches.
Aucklug slides - desktop tips and tricks - Glen Ogilvie
This document provides guidance on improving productivity when using Linux as a workstation. It recommends:
- Using SSH keys and agents for secure remote access.
- Familiarizing yourself with useful desktop applications and command line tools like terminals, editors, and clipboards.
- Managing passwords securely with a tool like Keepass and connecting devices with KDE Connect.
- Customizing Bash with configurations, shortcuts, syntax highlighting and tab completion to optimize the command line experience.
- Automating system setup and maintenance using tools like Ansible.
- Keeping organized notes on configurations, applications, and solutions to issues.
This document discusses automation from physical infrastructure to network security and DevOps using Ansible. It begins with an introduction and overview, then discusses:
- How Ansible can automate tasks across multiple platforms including cloud, Windows, virtualization, containers, network devices and more using its extensive module library.
- Examples of using Ansible playbooks to automate tasks like deploying applications, managing configurations, continuous delivery, security and compliance on servers, infrastructure, applications and other IT components.
- How Ansible's automation engine works using concepts like playbooks, modules, plugins, inventories to declaratively define the desired state and automate repetitive tasks.
This document provides an introduction and overview of Ansible automation from physical to NetSecDevOps. It discusses how Ansible provides simple yet powerful agentless deployment of applications and management of configurations. It is human-readable automation that allows entire teams to use and contribute. Ansible has cross-platform support without agents and uses OpenSSH, WinRM, APIs or Netconf. More than 1650 modules are included to automate tasks across clouds, virtualization, containers, networks, notifications and more. Playbooks ensure perfect application description and version control. Dynamic inventories capture servers regardless of infrastructure. Ansible allows automation from development to operations.
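The declarative, desired-state behaviour described above can be modeled in a few lines. This is a toy stand-in for a module's check-then-change contract (the function name, the fake "package database", and the return shape are illustrative, not Ansible's API): running it twice reports a change only the first time.

```python
def ensure_package(state, name, desired="present"):
    """Toy declarative module: describe the desired state and let the
    module decide whether anything needs to change (idempotence)."""
    installed = name in state
    if desired == "present" and not installed:
        state.add(name)
        return {"changed": True}
    if desired == "absent" and installed:
        state.remove(name)
        return {"changed": True}
    return {"changed": False}      # already in the desired state

server = set()                           # pretend package database
print(ensure_package(server, "httpd"))   # first run installs
print(ensure_package(server, "httpd"))   # second run is a no-op
```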
Why everyone is excited about Docker (and you should too...) - Carlo Bonamic... - Codemotion
In less than two years Docker went from first line of code to major Open Source project with contributions from all the big names in IT. Everyone is excited, but what's in it for me, as a Dev or Ops? In short, Docker makes creating Development, Test and even Production environments an order of magnitude simpler, faster and completely portable across both local and cloud infrastructure. We will start with Docker's main concepts: how to create a Linux Container from base images, run your application in it, and version your runtimes as you would with source code, and finish with a concrete example.
- The document discusses using Fabric and Boto for automating tasks in cloud computing environments. Fabric allows running Python scripts and commands over SSH, while Boto is the Python API for interacting with AWS services like EC2.
- Examples are provided of writing basic Fabric files with tasks to run commands on remote servers. Key features covered include defining host groups with roles, enabling parallel execution of certain tasks, and setting failure handling modes.
- Automating tasks with Fabric and Boto can improve efficiency, consistency, and manageability of cloud infrastructure and deployments.
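The host groups and failure-handling modes mentioned in the bullets above might be sketched like this. `ROLEDEFS` and the host names are hypothetical, and the function only mimics the spirit of Fabric's roles and `warn_only` setting; it is not Fabric itself.

```python
# Hypothetical host groups, in the spirit of Fabric's env.roledefs.
ROLEDEFS = {"web": ["web-01", "web-02"], "db": ["db-01"]}

def on_role(role, task, warn_only=False):
    """Run a task on every host in a role. With warn_only=True, record
    the failure and keep going; by default, abort on the first error."""
    results = {}
    for host in ROLEDEFS[role]:
        try:
            results[host] = task(host)
        except Exception as exc:
            if not warn_only:
                raise                    # fail fast, like the default mode
            results[host] = f"warning: {exc}"
    return results

print(on_role("web", lambda host: f"restarted nginx on {host}"))
```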
The document discusses using Puppet and Vagrant together to create a test environment for infrastructure configuration. Vagrant allows setting up and provisioning virtual machines quickly, while Puppet configures the desired state of systems. The demo project uses Vagrant to launch a CentOS virtual machine and Puppet to configure it based on roles like webserver or database.
Cloud meets Fog & Puppet: A Story of Version Controlled Infrastructure - Habeeb Rahman
Talk at Rootconf, a conference in Bangalore for sysadmins.
Gist of the talk:
Puppet is a great configuration management tool and Git is great at version control. AWS lets you create instances in a few clicks. But when it comes to large deployments, only automation (where these tools come together) can make you productive and happy. I will take you through the following: Fog, the Ruby cloud services library, and how it helps you create vendor-neutral cloud deployments; Puppet: multi-region puppet masters; Ruby: how Ruby pulls the strings together for EC2/ELB/RDS creation, security group creation, IP authorization, Route53 DNS, etc.; Git: how we use git to version control deployment configurations.
Virtualize and automate your development environment for fun and profit - Andreas Heim
The document discusses using Vagrant to virtualize and automate development environments. Vagrant allows developers to create identical virtual environments that match production. This ensures environments are the same across operating systems and developers. Vagrant uses automation tools like Chef and Puppet to configure environments. It addresses challenges like different dependency versions and allows quick resets. It advocates treating environments as code to make them documented, versioned and easily shared.
This document discusses using Fabric for Python application deployment and configuration management. It provides an overview of Fabric basics like tasks, roles, and environments. It also describes using Fabric for common operations like code deployment, database migrations, and managing server growth. Key advantages of Fabric include its simple task-based interface and ability to control multiple servers simultaneously. The document provides an example of using Fabric for a full deployment process including pushing code, running migrations, and restarting processes.
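The full deployment process described above (push code, run migrations, restart processes) hinges on ordering and on stopping at the first failure, so a broken migration never leads to restarting against a bad schema. A minimal sketch with placeholder step names:

```python
def deploy(steps):
    """Run deployment steps in order; stop at the first failure and
    report which steps completed and which one failed."""
    done = []
    for name, step in steps:
        if not step():               # each step returns True on success
            return done, name
        done.append(name)
    return done, None                # None means everything succeeded

steps = [
    ("push_code", lambda: True),     # placeholders for real operations
    ("migrate_db", lambda: True),
    ("restart", lambda: True),
]
print(deploy(steps))
```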
This document discusses Docker, an open source project that automates the deployment of applications inside software containers. It begins by describing common problems in application deployment and how virtual machines address some issues but introduce overhead. It then summarizes the history and rapid growth of Docker since its launch in 2013. The rest of the document dives into technical aspects of Docker like how images and containers work, comparisons to virtual machines, security considerations, the Docker workflow, and how Docker relates to DevOps and continuous delivery practices.
Extending DevOps to Big Data Applications with Kubernetes - Nicola Ferraro
DevOps, continuous delivery and modern architectural trends can incredibly speed up the software development process. Big Data applications cannot be an exception and need to keep the same pace.
This two-day training covers Docker concepts including installation, working with containers and images, building images with Dockerfiles, and integrating Docker with OpenStack. Day one focuses on the Docker introduction, installation, containers, images, Dockerfiles, and using Nova to run Docker containers as compute instances. It also covers using Glance as a Docker image registry. Day two covers Docker clustering with Kubernetes, networking, Docker Hub, case studies, and the Docker source code. It concludes with developing platforms and running Hadoop on Docker containers.
How Puppet Enables the Use of Lightweight Virtualized Containers - PuppetConf... - Puppet
The document summarizes how Puppet can be used to enable lightweight virtualized containers by configuring applications and their dependencies into immutable container images during the build process. It compares deploying a Jenkins application with LDAP authentication on virtual machines versus containers. It discusses challenges with service resources in containers and provides solutions like overriding service resources or using multi-process images with systemd to build immutable Puppet-configured application images.
This document summarizes Deepak Garg's presentation on Fabric and app deployment automation. Fabric allows defining Python functions to automate system administration and deployment tasks across multiple servers. Example functions showed provisioning VMs, installing packages, deploying code, and more. Fabric offers commands to run commands remotely, upload/download files, and decorators to define server groups and task properties. The goals of Fabric include testing infrastructure, deploying and scaling apps across identical environments, and making systems administration tasks Pythonic and automated.
Similar to A Fabric/Puppet Build/Deploy System
AC Atlassian Coimbatore Session Slides (22/06/2024) - apoorva2579
These are the combined sessions of the ACE Atlassian Coimbatore event held on 22nd June 2024.
The session order is as follows:
1.AI and future of help desk by Rajesh Shanmugam
2. Harnessing the power of GenAI for your business by Siddharth
3. Fallacies of GenAI by Raju Kandaswamy
Coordinate Systems in FME 101 - Webinar Slides - Safe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datums and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Details of description part II: Describing images in practice - Tech Forum 2024 - BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
How RPA Help in the Transportation and Logistics Industry.pptx - SynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
UiPath Community Day Kraków: Devs4Devs Conference - UiPathCommunity
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to spike your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
Paradigm Shifts in User Modeling: A Journey from Historical Foundations to Em... - Erasmo Purificato
Slides of the tutorial entitled "Paradigm Shifts in User Modeling: A Journey from Historical Foundations to Emerging Trends" held at UMAP'24: 32nd ACM Conference on User Modeling, Adaptation and Personalization (July 1, 2024 | Cagliari, Italy)
How Netflix Builds High Performance Applications at Global Scale - ScyllaDB
We all want to build applications that are blazingly fast. We also want to scale them to users all over the world. Can the two happen together? Can users in the slowest of environments also get a fast experience? Learn how we do this at Netflix: how we understand every user's needs and preferences and build high performance applications that work for every user, every time.
An invited talk given by Mark Billinghurst on Research Directions for Cross Reality Interfaces. This was given on July 2nd 2024 as part of the 2024 Summer School on Cross Reality in Hagenberg, Austria (July 1st - 7th)
What's Next Web Development Trends to Watch.pdf - SeasiaInfotech2
Explore the latest advancements and upcoming innovations in web development with our guide to the trends shaping the future of digital experiences. Read our article today for more information.
MYIR Product Brochure - A Global Provider of Embedded SOMs & Solutions - Linda Zhang
This brochure gives an introduction to MYIR Electronics and MYIR's products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and comprehensive solutions based on various architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services.
MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized and Special New" Enterprises in Shenzhen, China. Our core belief is that "Our success stems from our customers' success", and we embrace the philosophy of "Make Your Idea Real, then My Idea Realizing!"
Transcript: Details of description part II: Describing images in practice - T... - BookNet Canada
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
How Social Media Hackers Help You to See Your Wife's Message.pdfHackersList
In the modern digital era, social media platforms have become integral to our daily lives. These platforms, including Facebook, Instagram, WhatsApp, and Snapchat, offer countless ways to connect, share, and communicate.
7 Most Powerful Solar Storms in the History of Earth.pdfEnterprise Wired
Solar Storms (Geo Magnetic Storms) are the motion of accelerated charged particles in the solar environment with high velocities due to the coronal mass ejection (CME).
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
2. • Python software engineer, not a dev-ops guy
• Long-time Fabric user, just learned Puppet
• Developed this system with Gary Wilson, another Python dev who also just learned Puppet
WHO BUILT THIS?
3. Start from bare RHEL 6 VMs, with only basic services pre-installed (puppet, ntp, networking/firewall rules)
Provide tools to build, configure, and deploy:
15 existing websites in various technologies:
python, perl, php, ruby, & combinations
MySQL & Mongo databases
Memcache servers
Proxy servers
Search servers
Dev/Stage/Prod copies of all this
Automate everything
Never touch any server by hand
THE TASK
4. RHEL 6 is stable but ships very old versions of most software. For example, puppet hiera only just became available as an RPM.
Stage & Prod servers won't have internet access
Deployment to Stage/Prod will be done by operations people, not apps people.
Need rollback
Must have GUI or be simple
SOME CHALLENGES
5. RPM or Source Installs?
Git or Tar-based Deployment?
Chef/Puppet/Ansible/SaltStack?
Puppet preferred by our infrastructure group
We're Python devs, so Fabric seemed obvious, and it's not going away
SOME CHOICES
6. Executes commands either locally or remotely (via SSH)
Has functions for many common tasks
Easy to script
Anything you can do manually by SSHing to a server, you can script Fabric to do.
Goal is a repeatable, idempotent sequence of steps.
SO WHAT IS FABRIC?
7. Useful stuff it can do:
Confirm before doing things if you want
Run stuff in parallel on multiple machines, or serially
Run commands as if from within a given directory
Get & put files, append to files, comment or uncomment lines
Upload templates and fill in variables
Run sudo commands
Connect to one host, then to another, within the same function
BRIEF INTRO TO FABRIC
8. Many ways to specify hosts
Most common is to set env.hosts on the env object
Can specify on the command line
Can hardcode it (build tasks always happen on the build server)
Can make lists of hosts
Functions on the fab command line are executed in order, so the first function can set the host, and subsequent functions can use that setting
FABRIC HOSTS
9. def tail_log(logname):
    """Tail a log file.

    fab hostname tail_log:access
    logname is the filename of the log (without .log)
    """
    log = env.logdir + '/' + logname + '.log'
    if file_exists(log):
        run('tail -f %s' % log, pty=True)
    else:
        print "Logfile not found in %s" % env.logdir
EXAMPLE FABRIC TASK
10. def dev(service_name=None):
    """Sets server as appropriate for service_name for the dev environment.

    Also sets environ, server, and service_name in env so they are
    inherited by later fab commands. Some fab commands need an environment
    but are not specific to a service (such as mysql commands), so
    service_name is optional.
    """
    _set_host_for_environment(service_name, Environment.DEV)

The above function is just a clever way of setting env.host to a hostname, so that later commands on the same fab command line know which system to work on.
SETTING HOST CLEVERLY
11. We wrote classes using fabric to do all the tasks needed in
the build and deployment process for our sites (and invoke
puppet, which does the rest).
All of it is data-driven. There is a file that defines the needs
of each of our services and one that defines all of our servers.
HOW WE USED FABRIC
12. It's your responsibility to make things idempotent.
For example, running "mkdir dirname" is not idempotent, because it will fail the second time. In this case use a routine that tests whether the dir exists, and only creates it if not.
Output control
Normal output is everything (very verbose). Good for debugging, although it can hide problems in the sheer volume of information.
You can turn down the verbosity.
What we really want is two levels simultaneously: less verbose output displayed to the terminal, and fully verbose output logged to a file. But Fabric doesn't support that yet.
CHALLENGES WITH FABRIC
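The mkdir example above can be sketched as a small helper. This is a minimal local sketch: os.path.isdir and os.makedirs stand in for what a fabfile would do remotely with fabric.contrib.files.exists() and run('mkdir ...').

```python
import os

def ensure_dir(path):
    """Idempotent mkdir: create path only if it is not already there,
    so re-running the deployment does not fail on the second pass."""
    if not os.path.isdir(path):
        os.makedirs(path)
```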
13. Puppet and Fabric capabilities overlap
both can do most tasks
Puppet is naturally idempotent
Fabric is naturally step by step
Puppet: use to enforce STATE
RPM installation
Creation of upstart scripts from templates
User accounts
Files, Directories, and Permissions
Fabric: use to enforce WORKFLOW
Build software environment (python virtualenv/perl modules etc)
Protect from simultaneous deploys
Testing of support services
Sync of software environment from build server to deploy server
Checkout of git repos, switching branches
Media syncing
Run puppet
Graceful server restart
Smoke testing (is site actually working after deployment)
DIVISION OF RESPONSIBILITIES
14. All our custom puppet and fabric code is in a single git repo called "sysconfig".
This enables everything to be run from anywhere with network access to the build server.
PUPPET/FABRIC GIT REPO
15. Graphic of our dev servers and networking
DEV ENVIRONMENT
16. Utility modules used by multiple sites
Each site/service has its own module
4 internal sites run on intweb01; Mongo & MySQL run on db01
intweb01(d,s,p) is the internal web server, db01 is the db server, etc.
Diagram of website layout: IntWeb1Host runs IntWebsite1, IntWebsite2, and an Nginx proxy; DB1Host runs MySQL and MongoDB
PUPPET MODULES
17. Nodes manifest connects hostnames with the type of host each will be, for all servers in all environments.
Hosts manifest for each type of host (4 types of web servers, db server, cache server, proxy server, etc). This assigns sites to hosts.
Site manifests for each type of service (each website, proxy, database). Does RPM installation, site-specific files & dirs, and upstart scripts to start and stop the service.
Utility manifests for stuff needed by multiple sites, to minimize duplication. For example the nginx module supports 3 different uses of nginx: fake dev load-balancer proxy, SSL-offloading proxy, local proxy.
PUPPET MODULES/MANIFESTS
18. Use virtual packages to let every manifest install its dependencies without regard to whether some other manifest has already installed them on the same server.
We should use Hiera to let Puppet and Fabric pull from the same YAML database, but we haven't done this yet since Hiera only just became available on RHEL 6.
PUPPET FEATURES
19. 1. Build step builds the software environment for a site on the build server
2. Deploy step copies the software environment from the build server to the destination server, then deploys app code from scratch.
The two-step process does several things:
Speeds up deployment, since the build step is needed less often and takes a long time.
Speeds parallel deployment if you have redundant servers
Keeps compiling tools off destination servers. The less you install, the more secure they are.
FABRIC WORKFLOW
20. Pip/virtualenv used for Python packages, requirements file in git repo
cpanm used for Perl modules
rbenv used for Ruby modules
All packages, modules, and RPMs mirrored locally
Improves reliability and speed
Simplifies version control
Everything (except RPMs) installed in /opt/comms/servicename, not system-wide. This simplifies copying the environment to the deployed server, and simplifies recreating a clean build.
FABRIC BUILD WORKFLOW
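The Python part of the build step above can be sketched as follows. The slides use virtualenv and pip on the build server; this sketch substitutes the stdlib venv module so it runs anywhere, and the /opt/comms-style prefix is passed in rather than hardcoded. It is illustrative, not the actual build code.

```python
import os
import subprocess
import sys

def build_python_env(prefix, requirements=None):
    """(Re)create the per-service Python environment at prefix
    (e.g. /opt/comms/<servicename>/virtualenv) and install the
    packages listed in the repo's requirements file, if given."""
    # --clear wipes any previous contents, giving a clean rebuild
    subprocess.check_call([sys.executable, '-m', 'venv', '--clear', prefix])
    if requirements:
        pip = os.path.join(prefix, 'bin', 'pip')
        subprocess.check_call([pip, 'install', '-r', requirements])
```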
21. Example service definition
Name (for fab commands)
Server type it should be installed on (not hostname)
Domain (without dev/stg/com)
SSL or not
How to smoke test it
Init scripts it needs
Git repos to check out, including branch, and any media
Log dirs
Languages needed
Prerequisite services to check (memcache, db)
Related services to reload (nginx, memcache)
Dirs containing the built software environment (virtualenv, cpanm etc)
FABRIC BUILD SERVICE DEFINITION
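A hypothetical service definition covering the fields listed above might look like the dict below; every name, path, and value here is illustrative, not the actual production data.

```python
# Hypothetical data-driven service definition (one entry per service).
SERVICES = {
    'intwebsite1': {
        'server_type': 'intweb',                 # not a hostname
        'domain': 'intwebsite1.example',
        'ssl': False,
        'smoke_test': 'http',
        'init_scripts': ['intwebsite1'],
        'git_repos': [('git@git.example:intwebsite1.git', 'master')],
        'log_dirs': ['/var/log/comms/intwebsite1'],
        'languages': ['python'],
        'prereq_services': ['mysql', 'memcache'],  # check before deploy
        'reload_services': ['nginx'],              # reload after deploy
        'env_dirs': ['/opt/comms/intwebsite1/virtualenv'],
    },
}
```

Fabric tasks then look up whatever they need from this dict by service name, which keeps the workflow code generic across all 15 sites.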
22. 1. Mark deployment as in progress (using lock file)
2. Check support services
If db needed, is it running?
If memcache needed, is it running?
If critical support service not running, ask whether to continue.
FABRIC DEPLOYMENT WORKFLOW
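The lock-file guard in step 1 can be sketched like this. Local os calls stand in for the remote operations a fabfile would perform; the lock path and helper names are assumptions for illustration.

```python
import errno
import os

def acquire_deploy_lock(lockfile):
    """Mark a deployment as in progress by atomically creating the
    lock file; raise if another deployment already holds it."""
    try:
        fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise RuntimeError('deployment already in progress')
        raise
    os.close(fd)

def release_deploy_lock(lockfile):
    """Remove the lock once the deployment finishes (or is aborted)."""
    os.remove(lockfile)
```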
23. 3. Clone/pull the git repo(s) needed for the site. Check out the specified branch.
FABRIC DEPLOYMENT WORKFLOW
24. 4. Move previous software environment for fast rollback
5. Rsync software environment for site
6. Rsync media for site
FABRIC DEPLOYMENT WORKFLOW
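Steps 4 and 5 above can be sketched locally as follows: the current environment is renamed aside for fast rollback before the freshly built one is synced in. shutil.copytree stands in for rsync and os.rename for the remote mv; in the real workflow Fabric runs the equivalent commands on the deploy server, and all paths here are hypothetical.

```python
import os
import shutil

def stage_environment(current, previous, new_build):
    """Swap in a new software environment, keeping one rollback copy."""
    if os.path.isdir(previous):
        shutil.rmtree(previous)          # keep only one rollback copy
    if os.path.isdir(current):
        os.rename(current, previous)     # rollback = rename this back
    shutil.copytree(new_build, current)  # stand-in for rsync
```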
25. 7. Run puppet. For convenience we support two modes:
Use the puppet master
Copy the developer's sysconfig repo and run puppet using those modules. This makes development a lot faster.
FABRIC DEPLOYMENT WORKFLOW
26. 8. Zero-downtime restart
9. Smoke test
For web servers, check that the site is up and login works, and run selenium tests.
For memcache etc., use nc to test basic operation.
Note that if the deployment fails, there will be some downtime until the previous version is reinstated.
FABRIC DEPLOYMENT WORKFLOW
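The nc-style check in step 9 can be sketched in Python with a raw socket: connect, send memcached's "version" command, and see whether a sane reply comes back. Host, port, and the function name are illustrative.

```python
import socket

def memcache_alive(host, port=11211, timeout=2):
    """Basic smoke test: can we connect and get a VERSION reply?
    (Equivalent to: echo version | nc host port)"""
    try:
        s = socket.create_connection((host, port), timeout)
    except OSError:
        return False
    try:
        s.sendall(b'version\r\n')
        return s.recv(64).startswith(b'VERSION')
    finally:
        s.close()
```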
27. Setting up SSH keys on all servers
Log viewers
Database backups
Database copy from one environment to another (e.g. copy the production db back to dev)
Determine hostname from service name and environment
Status/start/stop/reload any remote service/site
Media syncing from environment to environment
Proxy server config generation
Running puppet, with the puppet master and without
Smoke testers for different types of sites & services
Tools to make local mirrors of internet software
FABRIC UTILITIES WE WROTE
28. Support for replicated (redundant) servers.
GUI for common tasks, using Rundeck or Jenkins or TeamCity
Network logging (Splunk)
FUTURE WORK