Marc embraces database virtualization and containers to help Dave's development team overcome data issues slowing their work. Virtualizing the database and creating "data pods" allows self-service access and the ability to quickly provision testing environments. This enables the team to work more efficiently and meet sprint goals. DataOps is introduced to fully integrate data into DevOps practices, removing it as a bottleneck through tools that provide versioning, automation and developer-friendly interfaces.
Best Practices in DataOps: How to Create Agile, Automated Data Pipelines (Eric Kavanagh)
Synthesis Webcast with Eric Kavanagh and Tamr
DataOps is an emerging set of practices, processes, and technologies for building and automating data pipelines to meet business needs quickly. As these pipelines become more complex and development teams grow in size, organizations need better collaboration and development processes to govern the flow of data and code from one step of the data lifecycle to the next – from data ingestion and transformation to analysis and reporting.
DataOps is not something that can be implemented all at once or in a short period of time; it is a journey that requires a cultural shift. DataOps teams continuously search for new ways to cut waste, streamline steps, automate processes, increase output, and get it right the first time. The goal is to increase agility and shorten cycle times while reducing data defects, giving developers and business users greater confidence in data analytics output.
This webcast examines how organizations adopt DataOps practices in the field. It will review the results of an Eckerson Group survey that sheds light on the rate and scope of DataOps adoption. It will also describe case studies of organizations that have successfully implemented DataOps practices, the challenges they have encountered, and the benefits they have received.
Tune into our webcast to learn:
- User perceptions of DataOps
- The rate of DataOps adoption by industry and other demographic variables
- DataOps adoption by technique and component (e.g., agile, test automation, orchestration, continuous development/continuous integration)
- Key challenges organizations face with DataOps
- Key benefits organizations experience with DataOps
- Best practices in doing DataOps
- Case studies and anecdotes of DataOps at companies
The document discusses migrating a data warehouse to the Databricks Lakehouse Platform. It outlines why legacy data warehouses are struggling, how the Databricks Platform addresses these issues, and key considerations for modern analytics and data warehousing. The document then provides an overview of the migration methodology, approach, strategies, and key takeaways for moving to a lakehouse on Databricks.
This document is a training presentation on Databricks fundamentals and the data lakehouse concept by Dalibor Wijas from November 2022. It introduces Wijas and his experience. It then discusses what Databricks is, why it is needed, what a data lakehouse is, and how Databricks enables the data lakehouse concept using Apache Spark and Delta Lake. It also covers how Databricks supports data engineering and data warehousing, and offers tools for data ingestion, transformation, pipelines, and more.
DevOps blends software development and IT operations to help businesses build and deliver applications quickly. By bringing development and operations teams together, it reduces errors and redundancies in the software development process.
In the past few years, the term "data lake" has leaked into our lexicon. But what exactly IS a data lake? Some IT managers confuse data lakes with data warehouses. Some people think data lakes replace data warehouses. Both of these conclusions are false. There is room in your data architecture for both data lakes and data warehouses: they serve different use cases, and those use cases can be complementary.
Todd Reichmuth, Solutions Engineer with Snowflake Computing, has spent the past 18 years in the world of data warehousing and big data, first at Netezza and later at IBM Data, before making the jump to the cloud at Snowflake Computing earlier in 2018.
Mike Myer, Sales Director with Snowflake Computing, has spent the past 6 years in the world of security and now works to raise awareness of the better data warehousing and big data solutions available. He was previously at local tech companies FireMon and Lockpath, and joined Snowflake because of its disruptive technology, which is truly helping folks in the big data world on a day-to-day basis.
This is Part 4 of the GoldenGate series on Data Mesh - a series of webinars helping customers understand how to move off of old-fashioned monolithic data integration architecture and get ready for more agile, cost-effective, event-driven solutions. The Data Mesh is a kind of Data Fabric that emphasizes business-led data products running on event-driven streaming architectures, serverless, and microservices based platforms. These emerging solutions are essential for enterprises that run data-driven services on multi-cloud, multi-vendor ecosystems.
Join this session to get a fresh look at Data Mesh; we'll start with core architecture principles (vendor agnostic) and transition into detailed examples of how Oracle's GoldenGate platform is providing capabilities today. We will discuss essential technical characteristics of a Data Mesh solution, and the benefits that business owners can expect by moving IT in this direction. For more background on Data Mesh, Parts 1, 2, and 3 are on the GoldenGate YouTube channel: https://www.youtube.com/playlist?list=PLbqmhpwYrlZJ-583p3KQGDAd6038i1ywe
Webinar Speaker: Jeff Pollock, VP Product (https://www.linkedin.com/in/jtpollock/)
Mr. Pollock is an expert technology leader for data platforms, big data, data integration, and governance. Jeff has been a CTO at California startups and a senior exec at Fortune 100 tech vendors. He is currently Oracle VP of Products and Cloud Services for Data Replication, Streaming Data and Database Migrations. While at IBM, he was head of all Information Integration, Replication and Governance products; previously, Jeff was an independent architect for the US Defense Department, VP of Technology at Cerebra, and CTO of Modulant, and he has been engineering artificial-intelligence-based data platforms since 2001. As a business consultant, Mr. Pollock was a Head Architect at Ernst & Young's Center for Technology Enablement. Jeff is also the author of "Semantic Web for Dummies" and "Adaptive Information," a frequent keynote speaker at industry conferences, an author for books and industry journals, formerly a contributing member of W3C and OASIS, and an engineering instructor with UC Berkeley's Extension, teaching object-oriented systems, software development process, and enterprise architecture.
An Architectural Deep Dive With Kubernetes And Containers Powerpoint Presenta... (SlideTeam)
Introducing An Architectural Deep Dive With Kubernetes And Containers PowerPoint Presentation Slides. Present the need for containers in an organization with the help of a readily available PPT slideshow. Discuss container architecture and use-case details to make your presentation thorough. Showcase the features, architecture, installation roadmap, and the 30-60-90 day plan in Kubernetes with the help of modern-designed PPT infographics. Familiarize your viewers with the various components of Kubernetes with the help of content-ready Kubernetes Docker PPT visuals. Make full use of high-quality icons to make your presentation attention-grabbing and meaningful. Compare and contrast Kubernetes with Docker Swarm on various parameters with the help of this attention-grabbing PPT slideshow. Elaborate on Kubelet, Kubectl, and Kubeadm with the help of labeled diagrams. Showcase the Kubernetes networking model, security measures, and the development process with this easy-to-use Docker architecture PowerPoint template. Hit the download button now to grab this presentation. https://bit.ly/3vtLeFb
Architect's Open-Source Guide for a Data Mesh Architecture (Databricks)
Data Mesh is an innovative concept addressing many data challenges from an architectural, cultural, and organizational perspective. But is the world ready to implement Data Mesh?
In this session, we will review the importance of core Data Mesh principles, what they can offer, and when it is a good idea to try a Data Mesh architecture. We will discuss common challenges in implementing Data Mesh systems and focus on the role open-source projects play in them. Projects like Apache Spark can play a key part in implementing the standardized infrastructure platform of a Data Mesh. We will examine the landscape of useful data engineering open-source projects to apply in several areas of a Data Mesh system in practice, along with an architectural example. We will also touch on what work (culture, tools, mindset) needs to be done to make Data Mesh more accessible to engineers in the industry.
The audience will leave with a good understanding of the benefits of Data Mesh architecture, common challenges, and the role of Apache Spark and other open-source projects for its implementation in real systems.
This session is targeted at architects, decision-makers, data engineers, and system designers.
The document provides an overview of the Databricks platform, which offers a unified environment for data engineering, analytics, and AI. It describes how Databricks addresses the complexity of managing data across siloed systems by providing a single "data lakehouse" platform where all data and analytics workloads can run. Key features highlighted include Delta Lake for ACID transactions on data lakes, Auto Loader for streaming data ingestion, notebooks for interactive coding, and governance tools to securely share and catalog data and models.
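As a concrete illustration of the ingestion feature mentioned above, here is a minimal sketch of Auto Loader streaming files into a Delta table; it assumes a Databricks notebook where a `spark` session is predefined, and the paths are hypothetical placeholders, not taken from the document.

```python
# Minimal Auto Loader sketch: stream JSON files landing in cloud storage
# into a Delta table. Assumes a Databricks notebook (where `spark` exists);
# the /landing, /chk, and /delta paths are hypothetical.
stream = (
    spark.readStream.format("cloudFiles")           # Auto Loader source
        .option("cloudFiles.format", "json")        # format of incoming files
        .load("/landing/events/")
)
(
    stream.writeStream.format("delta")              # Delta Lake (ACID) sink
        .option("checkpointLocation", "/chk/events/")
        .start("/delta/events/")
)
```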
This document discusses data mesh, a distributed data management approach for microservices. It outlines the challenges of implementing microservice architecture including data decoupling, sharing data across domains, and data consistency. It then introduces data mesh as a solution, describing how to build the necessary infrastructure using technologies like Kubernetes and YAML to quickly deploy data pipelines and provision data across services and applications in a distributed manner. The document provides examples of how data mesh can be used to improve legacy system integration, batch processing efficiency, multi-source data aggregation, and cross-cloud/environment integration.
A successful enterprise Journey to Cloud requires more than technical execution, and we'll help you learn what to consider, the pitfalls to avoid, and how to succeed. We've helped many companies, in Australia and globally, execute their digital vision and accelerate change on their Journey to Cloud. We'll share some of their experiences to help you discover how an optimised migration can transform your business.
Speakers:
Chris Fleishmann, Managing Director, Journey to Cloud Chief Architect
Attilio Di Lorenzo, Senior Manager, Journey to Cloud Architect
This document summarizes a presentation on "Infrastructure as Code" for beginners. It discusses automating deployment, provisioning, environments, and virtual machine management through continuous integration/delivery practices and configuration management tools. Specific topics covered include deployment pipelines, desired state configuration, separating configuration for different environments, immutable infrastructure patterns, building golden images, and infrastructure automation through tools like Ansible, Packer and Terraform. A demo is provided to illustrate these concepts in action.
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap others. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It’s a complicated story that I will try to simplify, giving blunt opinions of when to use what products and the pros/cons of each.
The document provides an overview of DataOps and continuous integration/continuous delivery (CI/CD) practices for data management. It discusses:
- DevOps principles like automation, collaboration and agility can be applied to data management through a DataOps approach.
- CI/CD practices allow data products and analytics to be developed, tested, and released continuously through an automated pipeline. This includes orchestration of the data pipeline, testing (sketched below), and monitoring.
- Adopting a DataOps approach with CI/CD enables faster delivery of data and analytics, more efficient and compliant data pipelines, improved productivity, and better business outcomes through data-driven decisions.
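To make the automated-testing step concrete, here is a minimal sketch of the kind of data-quality gate a DataOps CI/CD job might run before releasing a data product; the file name, columns, and rules are hypothetical, not taken from the document.

```python
# Hypothetical data-quality gate for a CI/CD stage: fail the build
# when the latest extract violates basic expectations.
import csv

def validate_orders(path: str) -> list[str]:
    """Return a list of data-quality violations found in the extract."""
    errors = []
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        errors.append("extract is empty")
    for i, row in enumerate(rows):
        if not row.get("order_id"):
            errors.append(f"row {i}: missing order_id")
        amount = row.get("amount") or "0"
        try:
            if float(amount) < 0:
                errors.append(f"row {i}: negative amount")
        except ValueError:
            errors.append(f"row {i}: non-numeric amount {amount!r}")
    return errors

if __name__ == "__main__":
    violations = validate_orders("orders_extract.csv")
    # A non-empty list fails the CI job, blocking the release.
    assert not violations, violations
```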
Modernizing to a Cloud Data Architecture (Databricks)
Organizations with on-premises Hadoop infrastructure are bogged down by system complexity, unscalable infrastructure, and the increasing burden on DevOps to manage legacy architectures. Costs and resource utilization continue to go up while innovation has flatlined. In this session, you will learn why, now more than ever, enterprises are looking for cloud alternatives to Hadoop and are migrating off the architecture in large numbers. You will also learn how the benefits of elastic compute models helped one customer scale their analytics and AI workloads, along with best practices from their successful migration of data and workloads to the cloud.
At wetter.com we build analytical B2B data products and make heavy use of Spark and AWS technologies for data processing and analytics. I explain why we moved from AWS EMR to Databricks and Delta and share our experiences from different angles, such as architecture, application logic, and user experience. We will look at how security, cluster configuration, resource consumption, and workflows changed by using Databricks clusters, as well as how using Delta tables simplified our application logic and data operations.
#SnowCamp2020 - DataOps is not only DevOps applied to DATA projects (Frédéric Petit)
Data is the "new black gold"! But how do you tackle the challenge of building true refineries that speed up and streamline the processing of enterprise data in order to produce innovative services? Come discover the manufacturing process of a project built on data!
Drawing on its natural parents, Agility and the #DevOps culture, we will discover the fundamental principles of the #DataOps practice.
Software vendors: innovate with Artificial Intelligence (Guillaume Renaud)
Market opportunities for software vendors
The Microsoft AI platform
Feedback from AB Tasty & Damdy
How to capitalize on AI in your solutions
The IT world has always been divided into two universes: the people who build (Dev) and the people who run production (Ops). This separation can generate stress and frustration. The teams don't feel they are moving in the same direction, and that hurts productivity. To reconcile them, a set of practices and tools has been devised; they hide behind the term DevOps. What is it exactly? What problems does it solve? What is the right approach to putting it in place? In this introductory session, we offer our vision on the subject.
More and more data science initiatives are made possible by the creation of data lakes. The machine learning models included in these projects are, like any application, subject to change. Can tracking these changes be automated? Do the standard deployment practices still apply to these applications?
In this slot, we will present several avenues for reconciling Continuous Delivery and Machine Learning.
Joseph Glorieux & Mathieu Brun - Maintenant que mon delivery pipeline est en pl... (matteo mazzeri)
DevOps is often reduced to deploying tools that allow automatic infrastructure provisioning and rapid deployment of new features. Yet these new capabilities, coupled with the introduction of new practices such as IaC, allow us to broaden the scope of our DevOps transformation.
But then, after a few months, the release frequency is in free fall and tension between the teams has grown: the Ops grumble about the last 5 failed deployments, and the Devs blame the Ops for their ignorance of the microservice architecture.
A vague feeling of déjà vu, perhaps… What if we had forgotten something?
In this session, we propose to share the organizational models we have set up or observed over the past few years in the companies we have worked for. Even if everything did not always go as planned, these organizational changes remain the necessary catalyst to facilitate the DevOps transformation and make it last.
During the PaaS Tour de France, I co-organized and delivered a talk, in the form of an experience report (REX), on Hager.
Vincent Thavonekham, Regional Director
AZUG FR-MUG Lyon
VISEO
Octo breakfast session: Infrastructure at the service of its projects (Adrien Blind)
This presentation looks back at the IT infrastructure automation project at Société Générale, in the broader context of rolling out continuous delivery and DevOps practices and tools.
My agile tool stack, quite a program! (Cédric Leblond)
For development, we all use tools. Their sheer number, and above all their integration, can become a real headache, especially if you have to support sometimes very different technologies. I propose building a fully integrated and flexible platform with Visual Studio Online. Integrated, because all the data is available there. Flexible, because its APIs let you extend it with your favorite agile tools (Trello, Zendesk, Jenkins, Jira, ...) and thus adapt it to your needs.
IDC Observatoire 2020 de l'Automatisation des Métiers: vers l'Intelligent Pro... (Bonitasoft)
In 2020, improving employee efficiency and enterprise performance necessarily involves the digital transformation of companies. In Europe, 65% of CEOs say they are concerned by the subject and expect results from their teams.
With its Observatory of Business Process Automation, IDC takes stock of the topic's progress within French IT departments:
- How extensive and advanced is the digitalization of processes?
- What is the operational impact of this digitalization on organizations and on their customers?
- What strategies have large groups developed and implemented?
- How do IT departments collaborate with business lines and with general management?
- BPM, RPA, ACM and AI: what technological tooling have IT teams deployed?
Combining automation, robotization, and optimization while taking advantage of new software paradigms (APIs, system connectivity and extensibility, cloud computing), IDC's analysis draws the portrait of Intelligent Process Automation.
video: https://youtu.be/Lka0T-zfvlc
This Tuesday, April 18, we addressed the following topic:
"What challenges does the organization face with continuous deployment tools (DevOps) and agile practices (Scrum and SAFe)?"
Microsoft presented its strategy as well as a concrete use case, showing how a developer publishes to a store in real time.
Similar to DataOps introduction : DataOps is not only DevOps applied to data!
Ansible, Terraform, CloudFormation, [insert your favorite tech here]… Infra-as-code solutions abound. So why talk about the latest fashionable offspring backed by the CNCF? Alright, let's spoil things a little! Built on Kubernetes, Crossplane lets you converge the delivery of a containerized app with all the other resources it requires outside your favorite K8S cluster, yet badly needs to work properly: an S3 bucket, a managed database, and so on. You thus orchestrate the lifecycle of your complete application from one and the same perspective. Add to that easier multicloud and a real ability to fit into a GitOps approach, and you get a very effective solution for organizing your next deployments!
This presentation explains what serverless is all about, describing the context from the Dev and Ops points of view and presenting the various ways to achieve serverless (Functions as a Service, BaaS...). It also presents the various competitors on the market and demos one of them, OpenFaaS. Finally, it enlarges the picture, positioning serverless, combined with edge computing and IoT, as a valuable triptych that cloud vendors are building end-to-end offers on top of.
Two self-managed Docker clusters are deployed on public clouds and fight each other in a ruthless battle. One has been designed to resist any form of threat. The other one's only aim is to destroy the first. Who's going to win?
Although it's presented as entertainment, this talk will show off two serious platforms built on different principles. Beyond the technical aspects covered (Swarm/Kubernetes orchestration, IaaS clouds, various tools such as Terraform, kops, or Helm), it will be an opportunity to discuss broader architecture topics such as immutable infrastructure, hybridization, microservices, etc.
DevOps at scale: what we did, what we learned at Societe Generale (Adrien Blind)
The following talk discusses Societe Generale's transformation journey to DevOps, and more broadly to continuous delivery principles, inside a large, traditional company. It emphasizes the importance of practices over tooling, a human-centric approach relying heavily on coaching, and our "framework" approach for scaling it up to the IS level.
It was initially delivered at the DevOps Rex conference with teammate Laurent Dussault, also a DevOps coach at Societe Generale.
Unleash software architecture leveraging on docker (Adrien Blind)
The following talk first revisits key aspects of microservices architectures. It then shifts to Docker, explaining in this context the benefits of containers, and especially the new orchestration features introduced with version 1.12.
Docker, cornerstone of cloud hybridation ? [Cloud Expo Europe 2016] (Adrien Blind)
The following talk discusses the opportunity to leverage Docker to create a hybrid logical cloud, built simultaneously on top of traditional datacenters and public cloud vendors, and able to manage new kinds of containers (Windows, Linux on ARM). It also discusses the value of such a capacity for applications, in a context of topology orchestration and microservice-oriented applications.
DevOps à l'échelle: ce que l'on a fait, ce que l'on a appris chez Societe Gen... (Adrien Blind)
The following talk discusses Societe Generale's transformation journey to DevOps, and more broadly to continuous delivery principles, inside a large, traditional company. It emphasizes the importance of practices over tooling, a human-centric approach relying heavily on coaching, and our "framework" approach for scaling it up to the IS level.
It was initially delivered at the DevOps Rex conference with teammate Laurent Dussault, also a DevOps coach at Societe Generale.
Docker, cornerstone of an hybrid cloud? (Adrien Blind)
In this presentation, I propose to explore the orchestration and hybridization potential unlocked by Docker 1.12 Swarm Mode, and the benefits that follow.
I'll first recall why Docker fits the microservices paradigm well, and how this architecture raises new challenges: service discovery, app-centric security, scalability and resilience, and of course orchestration.
I'll then discuss the opportunity to create your own Docker CaaS platform spanning several cloud vendors and traditional datacenters at once, rather than just relying on vendors' integrated offers.
Finally, I'll discuss the rise of new technologies (Windows containers, ARM architectures) in the Docker landscape, and the opportunity to integrate them into a global, composite Docker orchestration able to describe complex apps end to end.
Since many apps are not just a single container, this talk discusses the ability to create a hybrid Docker cluster capacity spanning Linux and Windows OSes and x86 and ARM architectures, and the benefits of doing so.
Moreover, the Docker nodes composing this cloud will be hosted across several providers (local DC, cloud vendors such as Azure or AWS) in order to address various scenarios (cloud migration, elasticity...).
DevOps, NoOps, everything-as-code, commoditization… What future for the Ops? (Adrien Blind)
Implementing continuous delivery puts new pressure on the Ops, as an application's infrastructure and operability are now built at the ever-increasing pace of delivered iterations. In parallel, architecture patterns are evolving too: resilience and scalability are increasingly handled within the applications themselves, gradually reducing infrastructure to the rank of a commodity. Finally, Dev teams keep asking for more autonomy and an ergonomics better suited to their needs; cloud players and star solutions like Docker have understood this well, offering products that speak to them directly. The temptation of NoOps keeps growing…
The challenge for the Ops is therefore to propose a positioning and an offer in tune with these new expectations. The challenges are numerous, both technical (infra-as-code, software-defined software/storage, IS hybridization…) and non-technical (agility, craftsmanship, devops…).
Devs taking over the Ops' role, Ops acquiring Dev skills… In this session, we propose to explore these deep cultural and technical shifts, and we will share a few recipes for the greater benefit of Ops… and Devs alike. As Audiard wrote: "When things change, they change... Never let it throw you!"
Introduction to Unikernels at first Paris Unikernels meetup (Adrien Blind)
This is an introduction to unikernels and their impact on architecture and IT organizations (in French; I'll translate it soon). I produced this talk for the first Paris Unikernels Meetup.
When Docker Engine 1.12 features unleashes software architecture (Adrien Blind)
This slide deck deals with the new features delivered in Docker Engine 1.12, in the larger context of application architecture and security. It was presented at Voxxed Days Luxembourg 2016.
The document discusses full-stack automation and DevOps. It introduces Clément Cunin and Adrien Blind and their roles. Key benefits discussed include reduced time to market, repeatability, and serenity. Methods discussed include deploying new releases daily with a 15-minute commit-to-production time, treating infrastructure as code, using ephemeral environments, and measuring everything.
This presentation discusses how to achieve continuous delivery using Docker containers as universal application artifacts. It was presented at Voxxed '15 Bucharest.
Docker: Redistributing DevOps cards, on the way to PaaS (Adrien Blind)
This talk first presents Docker through its key characteristics: being portable, disposable, live, and social. It then discusses a new type of cloud, the CaaS (Container as a Service), and its potential benefits for PaaS (Platform as a Service).
Docker, cornerstone of continuous delivery? (Adrien Blind)
This presentation explores continuous delivery principles built on Docker: it depicts the use of Docker containers as universal application artifacts, flowing along a deployment pipeline.
This slideshow was initially presented at the Devops D-Day conference in Marseille.
Identity & Access Management in the cloud (Adrien Blind)
This presentation discusses the evolution of the IAM (Identity & Access Management) problem space, in a context that increasingly pushes the externalization and opening (B2B, B2C) of enterprise IS, with massive reliance on the cloud.
The talk focuses in particular on IAM SSO and federation topics, and the corresponding technologies (SAML, OpenID, OAuth...).
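As a quick, hedged illustration of the protocol family listed above (not code from the talk), here is a minimal OAuth 2.0 client-credentials token request in Python; the endpoint, client ID, secret, and scope are hypothetical placeholders.

```python
# Hypothetical OAuth 2.0 client-credentials flow: a service exchanges its
# credentials for a bearer token at the identity provider's token endpoint.
import requests

resp = requests.post(
    "https://idp.example.com/oauth/token",   # hypothetical token endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "my-app",               # placeholder client
        "client_secret": "s3cret",           # placeholder secret
        "scope": "api.read",
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]   # used as "Authorization: Bearer ..."
```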
The missing piece : when Docker networking and services finally unleashes so... (Adrien Blind)
Docker now provides several building blocks combining engine, clustering, and componentization, while the new networking and service features enable many new use cases, such as multi-tenancy. In this session, you will first discover the new experimental networking and service features expected soon, then move quickly to software architecture, explaining how a complete Docker stack unleashes microservices paradigms.
The first part of the talk introduces SDNs and service registries and covers the corresponding experimental Docker network and service features with a technical focus. For instance, it explains how to create an overlay network on top of a Swarm cluster, or how to publish services.
The second part of the talk moves from infrastructure to application concerns, explaining that application architecture paradigms are shifting. In particular, we discuss the growing porosity of companies' IS (especially due to the massive use of cloud services), drifting security boundaries from the global IS perimeter to the application's shape. We also note that traditional SOA patterns relying on buses (i.e., ESBs and ETLs) are being replaced by microservices promoting more direct, full-mesh interactions. To complete the picture, we'll also quickly recall other trends and shifts already covered by other Docker components: scalability and resiliency supported by the apps themselves, fine-grained applications, and even infrastructure commoditization…
Most of all, the last part depicts a concrete, state-of-the-art application that applies all the properties discussed previously and relies on a multi-tenant Docker full stack using the new networking and service features, in addition to the traditional Swarm, Compose, and Engine components. And since just saying it doesn't make it true, we'll be happy to demonstrate it live!
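To ground the overlay-network and service-publication features discussed above, here is a hedged sketch using the present-day docker SDK for Python against a running Swarm manager; the network name, service name, image, and port mapping are hypothetical, and the SDK calls postdate the experimental CLI features the talk demoed.

```python
# Hypothetical sketch: create an attachable overlay network on a Swarm
# cluster, then publish a replicated service on it via the routing mesh.
import docker

client = docker.from_env()                    # talks to the Swarm manager
client.networks.create("demo_overlay", driver="overlay", attachable=True)
client.services.create(
    image="nginx:alpine",
    name="web",
    networks=["demo_overlay"],
    # Expose container port 80 on cluster-wide port 8080.
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
)
```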
Docker networking basics & coupling with Software Defined Networks (Adrien Blind)
This presentation recaps Docker networking basics, introduces fundamental Software-Defined Networking paradigms, and then proposes a combined implementation that benefits from coupling these two technologies. The proposed implementation model could be a good starting point for building multi-tenant PaaS platforms.
As a bonus, the internal design of OpenStack Neutron is presented.
You can also have a look at our previous presentation on enterprise patterns for Docker:
http://fr.slideshare.net/ArnaudMAZIN/docker-meetup-paris-enterprise-docker
Vision de Claude 3.5 SONNET Comment utiliser la vision Utilisez les capacités... (Erol GIRAUDY)
Claude 3.5 SONNET Vision
The Claude 3 model family features new vision capabilities that allow Claude to understand and analyze images, opening up exciting possibilities for multimodal interaction.
https://www.ugaia.eu/2024/07/claude-35.html
This guide describes how to use images in Claude, including best practices, code examples, and limitations to keep in mind.
How to use vision
Use Claude's vision capabilities via:
• claude.ai. Upload an image as you would a file, or drag and drop an image directly into the chat window.
• The console Workbench. If you select a model that accepts images (Claude 3 models only), a button to add images appears at the top right of each User message block.
• API requests. See the examples in this guide.
Anthropic's interactive prompt engineering tutorial.pdf (Erol GIRAUDY)
Anthropic's interactive prompt engineering tutorial. This course is designed to give you a complete, step-by-step understanding of how to design optimal prompts in Claude.
After completing this course, you will be able to:
Master the basic structure of a good prompt
Recognize common failure modes and learn the "80/20" techniques to address them
Understand Claude's strengths and weaknesses
Build powerful prompts from scratch for common use cases
This tutorial also exists on Google Sheets, using Anthropic's Claude for Sheets extension. We recommend using that version, as it is more user-friendly.
When you are ready to begin, go to 01_Basic Prompt Structure to continue.
Join the whole French-speaking Liferay community for a virtual meetup (100% remote) during the lunch break on Thursday, July 4.
This meetup will be a chance to present 5 topics worth a bit of technical watch between two naps on the beach this summer.
For each topic we'll give you a short summary, discuss it together, and of course give you all the useful pointers to keep you busy on the (admittedly rare) rainy days this summer.
On the program: HTMX, Alpine.js, animation.css, N8N, Sentry, GlitchTip
And of course the traditional open discussions won't be forgotten!
CLAUDE 3.5 SONNET: EXPLANATIONS of its uses (Erol GIRAUDY)
Introducing Claude 3.5 Sonnet
The Claude 3 model family features new vision capabilities that allow Claude to understand and analyze images, opening up exciting possibilities for multimodal interaction.
Artifacts: a new way to use Claude
See also on my blog:
www.ugaia.eu
2. Frédéric PETIT, Head of Architecture and Head of Data Department, MNT-VYV
Adrien BLIND, DataOps Evangelist and Docker Captain, SAAGIE
Twitter: @madgicweb, @AdrienBlind, @mutuelleMNT, @Groupe_VYV, @saagie_io
3. Kevin KELLY, editor-in-chief and founder of "Wired" magazine: "The business plans of the next 10,000 startups are easy to forecast: take X and add AI."
Photo: @USIEvents. Source: here.
4. The "traditional product" programming approach
Diagram: in the "traditional" approach, source code and data run on compute to produce an output, with a feedback loop back into the code.
5. The programming approach of an "intelligent product"
Diagram: next to the traditional approach (source code + data -> compute -> output, plus feedback), the "machine learning" approach runs training code and labeled data on training compute to produce model(s), which are refined through feedback.
6. DATA is the raw material of the system!
Diagram: both approaches sit on top of a data factory: data sources feed back-end data extraction, exposed through APIs; on the consumer side, analytics requests, consumer data, dashboard code, and data viz are served, each with its own feedback loop.
7. Data IS vs. operational IS
Diagram: the same picture split in two, with the data factory side (sources, extraction, APIs, analytics, dashboards, data viz) forming the data IS and the application side (the traditional and ML compute loops) forming the operational IS.
12. The associated pipelines
Diagram: three parallel pipelines. The APPLICATION pipeline (needs -> develop -> build -> test -> release -> deploy) yields an operated application; the MODELS pipeline (develop -> training -> test -> evaluate -> release) yields optimized models; the DATA pipeline (extract -> prepare -> analyse -> storage -> publish) yields exposed data. All three are driven by intelligence, data, and capital.
13. The topic is not the data lake, it's data PROCESSING
If data is the new black gold, then:
● Your data lakes are your oil fields, your capital (large masses of raw data)
● Hive/Impala & co. are your oil wells, letting you query the resource
● But it is really the orchestrators, your refineries, that turn the capital into use value (see the sketch below)
Diagram: a data lake (data storing: data lakes, object storage, ...) feeds a data-processing chain (extract -> prepare -> analyse -> publish) that delivers datamarts and shared dataset(s) to consumers.
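To make the refinery metaphor concrete, here is a minimal sketch of the extract -> prepare -> analyse -> publish chain, with each stage as a plain function an orchestrator could schedule; the data and names are hypothetical.

```python
# Hypothetical four-stage pipeline: each function is one refinery step
# that an orchestrator would schedule and monitor.
def extract() -> list[dict]:
    # Stand-in for reading raw records from the data lake.
    return [{"city": "Paris", "temp": 21.0}, {"city": "Lyon", "temp": None}]

def prepare(rows: list[dict]) -> list[dict]:
    # Drop records that cannot be used downstream.
    return [r for r in rows if r["temp"] is not None]

def analyse(rows: list[dict]) -> dict:
    # Reduce the cleaned records to a published metric.
    return {"avg_temp": sum(r["temp"] for r in rows) / len(rows)}

def publish(result: dict) -> None:
    print(f"publishing to datamart: {result}")  # stand-in for a real sink

publish(analyse(prepare(extract())))
```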
15. The associated pipelines
Diagram: the same three pipelines (application, models, data), now overlaid with the practices that drive them (DevOps, MLOps, DataOps) and with their roles as the innovation, value, and analysis pipelines.
16. What holistic culture brings all these initiatives together?
Diagram: the application, models, and data pipelines once more, overlaid with DevOps, MLOps, and DataOps and with the innovation, value, and analytics pipelines.
Make your suggestions! http://bit.ly/36eqHqL
17. We digress; back to DataOps!
Let's approach the DataOps topic from one premise: you have already put the DevOps culture in place in your company :)
Source: Giphy @ Snuls
18. Technical skill domains required
Diagram: the three pipelines again, mapped to the skill domains they require: DevOps for the application pipeline, MLOps for the models pipeline, and DataOps for the data pipeline.
19. Operational skill domains required
Diagram: DevOps combines DEV, OPS, and BIZ; DataOps combines DEV, OPS, and BIZ with the Data Scientist and the Data Engineer.
20. How the DevOps culture is organized
Diagram: a tribe made of squads, with a Dev chapter and an Ops chapter cutting across them.
21. The DataOps organization that would seem natural
Diagram: the same tribe of squads, with the Dev and other chapters joined by a DATA chapter.
22. The DataOps organization actually observed
Diagram: the tribe of squads keeps its Dev and other chapters, while data-oriented squads are grouped in a separate DATA LABS structure.
23. The "post-maturation" DataOps organization
Diagram: the tribe of squads gains a DATA chapter, while the DATA LABS shrinks to a single data-oriented squad.
24. Data Factory
Diagram: a capability map running from distributed to centralized, covering data dictionary, data extraction / lineage, data catalog, data exposition, data processing, data warehouse / data lake, data collection, data exploration & analysis tools, data viz, ML code, ML training (model), monitoring, data verification, data quality, governance / security, modelization, service, and presentation.