This document discusses operationalizing machine learning with Splunk. It begins with an overview of machine learning and the challenges of applying it to real-time data. It then provides examples of machine learning use cases in IT operations, security, and customer analytics. The document outlines the machine learning process of getting data, exploring it, fitting and validating models, predicting outcomes, and operationalizing results. It highlights machine learning capabilities in Splunk products like the ML Toolkit, UBA, and ITSI and provides next steps for audiences to learn more.
Machine Learning and Analytics Breakout SessionSplunk
The document provides an overview of operationalizing machine learning with Splunk. It discusses how machine learning can help with challenges around data being in constant motion and the need to make real-time decisions using all available historical and real-time data. The document then covers machine learning concepts, different types of machine learning, examples of use cases in IT operations, security, and business analytics. It outlines the typical machine learning process of getting data, exploring it, fitting and applying models, and validating and operationalizing models. Finally, it discusses how machine learning can be done with Splunk through its machine learning toolkit and apps.
Explain the Value of your Splunk Deployment Breakout SessionSplunk
This document provides best practices for documenting value realization from a Splunk deployment. It recommends aligning Splunk use with key organizational objectives. Steps include identifying current success stories, quantifying benefits realized using key metrics, and outlining additional value that can be achieved. Metrics to track include reduced incidents, faster issue resolution, and improved efficiencies. Adoption curves and staff training plans should be defined to fully realize potential value. The document aims to help customers justify further Splunk investment and expansion.
This document discusses DevOps concepts and how Splunk can be used to power DevOps initiatives. It defines key DevOps terms like continuous deployment, continuous delivery, push vs. pull deployments. It also outlines how Splunk provides visibility across the application development lifecycle from coding to testing to production. Example use cases are presented that leverage Splunk data and analytics to improve developer productivity, deployment health, and operational efficiency. The document promotes transforming organizations to DevOps using Splunk to provide a unified platform for data-driven insights.
Get your Service Intelligence off to a Flying StartSplunk
The document provides guidance to customers on getting started with Splunk IT Service Intelligence. It recommends bringing subject experts together to identify a problem worth solving, such as issues impacting critical business services. It also suggests designing service models before configuring tools to help map business, application, and infrastructure layers and define key performance indicators. The document offers to help customers with workshops, assessments, and best practices to maximize their investment in Splunk IT Service Intelligence.
The document summarizes Splunk Enterprise 6.3, highlighting key new features and capabilities. It discusses breakthrough performance and scale improvements including doubled search and indexing speed and 20-50% increased capacity. It also covers advanced analysis and visualization features like anomaly detection, geospatial mapping, and single-value display. New capabilities for high-volume event collection and an enterprise-scale platform with expanded management, custom alert actions, and data integrity control are also summarized.
Getting Started with Splunk Enterprise Hands-OnSplunk
The document provides an overview of Splunk Enterprise and how it can be used to analyze machine data. It discusses concepts like big data, machine data, and how Splunk allows users to index data from various sources and ask questions of that data through a simple interface. Specific commands are also shown that demonstrate searching tutorial machine data to extract fields, visualize results through charts, and save searches and dashboards.
Machine Learning and Analytics Breakout SessionSplunk
This document provides an overview of machine learning and how it can be used with Splunk. It discusses what machine learning is, the different types of machine learning, and common use cases in IT operations, security, and business analytics. It also summarizes how machine learning can be implemented using Splunk, including exploring data, building models, applying and validating models, and operationalizing models. The document encourages attendees to try out the free Splunk Machine Learning Toolkit and Showcase app.
How to Design, Build and Map IT and Biz Services Breakout SessionSplunk
This document provides information on how to design and implement service intelligence using Splunk. It discusses identifying critical business services and issues to focus on, bringing subject matter experts together in a collaborative workshop to design service models before configuration. The workshop uses a case study of the gaming company Buttercup Games to design a supply chain service intelligence model, mapping key performance indicators and services across infrastructure, application, and business layers to gain visibility into issues impacting customer experience and revenue. Attendees are encouraged to sign up for a similar joint workshop with Splunk to start unlocking the value of their machine data.
Learn How to Design, Build and Map Services to Quantifiable Measurements in S...Splunk
This document provides an agenda for a webinar on designing, building, and mapping IT and business services in Splunk. The webinar will discuss the methodology and value of service design and mapping, how to derive "service intelligence", an introduction to Splunk IT Service Intelligence, and a demo of Splunk ITSI Glass Tables. It includes speakers, a safe harbor statement, and information on a next webinar in the series on accelerating troubleshooting with interactive visualizations.
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
How to Design, Build and Map IT and Business Services in SplunkSplunk
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand.
This document discusses how Splunk can be used for DevOps. It defines DevOps as integrating development and operations. It then discusses some common DevOps metrics like culture, process, quality, systems, activity, and impact metrics. It explains that machine data from across the development lifecycle and IT operations is a critical source of DevOps metrics. The document provides examples of how Splunk can provide visibility and collect machine data from various parts of the development and operations environments, like code review, version control, CI/build servers, testing, releases, and infrastructure systems. It discusses how Splunk can be used to increase delivery velocity, improve code quality, and enable data-driven continuous delivery for DevOps teams.
Stela Udovicic, Product Marketing, Splunk presentation regarding driven application delivery with machine data insights. Presented at DevOpsDays Vancouver: April, 2016.
Splunk Tutorial for Beginners - What is Splunk | EdurekaEdureka!
The document discusses Splunk, a software platform used for searching, analyzing, and visualizing machine-generated data. It provides an example use case of Domino's Pizza using Splunk to gain insights from data from various systems like mobile orders, website orders, and offline orders. This helped Domino's track the impact of various promotions, compare performance metrics, and analyze factors like payment methods. The document also outlines Splunk's components like forwarders, indexers, and search heads and how they allow users to index, store, search and visualize data.
This document provides an overview and demonstration of Splunk Enterprise. The agenda includes an overview of Splunk, a live demonstration of installing and using Splunk to search, analyze and visualize machine data, a discussion of Splunk deployment architectures, and information on Splunk communities and support resources. The demonstration walks through importing sample data, performing searches, creating a field extraction, building a dashboard, and exploring Splunk's alerting, analytics and pivot interface capabilities.
Splunk: How to Design, Build and Map IT ServicesSplunk
This document discusses how to design, build, and map IT and business services in Splunk to gain "service intelligence." It describes a methodology for bringing subject matter experts together to design services top-down before configuration. Specifically, it discusses deconstructing a company's supply chain, online store, and ERP systems into a service map to gain insights on key performance indicators and improve issue resolution, efficiency, and customer satisfaction.
Taking Splunk to the Next Level - ArchitectureSplunk
This session led by Michael Donnelly will teach you how to take your Splunk deployment to the next level. Learn about Splunk high availability architectures with Splunk Search Head Clustering and Index Replication. Additionally, learn how to manage your deployment with Splunk’s operational and management controls to manage Splunk capacity and end user experience
This document discusses how Herbalife, a company that produces health and wellness products, uses Splunk to monitor their global ecommerce website and applications. It describes how Splunk has improved their operational visibility and issue resolution by enabling logging of web, SQL, application, and development data across their four data centers. Splunk has helped them scale from 10GB to 50GB of data in six months, improve mean time to resolution from days to minutes, and support over 250 users accessing logs and metrics.
Data-Driven DevOps: Mining Machine Data for 'Metrics that Matter' in a DevOps...Splunk
IT organizations are increasingly using machine data - including in DevOps practices - to get away from 'vanity metrics' and instead to generate 'metrics that matter'. These metrics provide visibility into the delivery of new application code and the business value of DevOps, to both IT and business stakeholders.
Machine data provides DevOps teams and others - including QA, secops, CxOs and LOB leaders - with meaningful and actionable metrics. This allows stakeholders to monitor, measure, and continuously improve the velocity and quality of code throughout the software lifecycle, from dev/test to customer-facing outcomes and business impact.
In this session Andi Mann, chief technology advocate at Splunk, will share core methodologies, interesting case studies, key success factors and 'gotcha' moments from real-world experience with mining machine data to produce 'metrics that matter' in a DevOps context.
Splunk can help customers document business value by providing deliverables like business cases, value realization studies, and adoption roadmaps. It has helped over 700 customers worldwide since 2013. Key value drivers reported by customers include IT operations, application delivery, security, and compliance. Common challenges to documenting value include lack of tools, benchmarks, and time. The document outlines best practices for positioning value at Splunk, including quantifying business value, qualifying pain points, aligning with objectives, and measuring success. It provides examples of value drivers achieved in areas like infrastructure optimization, revenue growth, and risk reduction.
ARTIFICIAL INTELLIGENCE: The Future of Business. Diego Saenz
This document summarizes an artificial intelligence presentation about how AI will help businesses grow. It discusses thought leaders saying AI will be highly disruptive. Over $6 billion has been invested in over 1,000 AI startups focusing on machine learning, computer vision, and other areas. The presentation uses the healthcare industry as a case study, noting big players making big bets in AI and how startups will transform industries. It encourages attendees to study funded startups, identify problems AI could solve, test solutions, find unused data sources, build networks, and learn lean startup principles to take action applying AI in their own industries and businesses.
Don’t bid farewell to your insurance agent just yet. NTT DATA Consulting provides a reality check on AI’s transformative impact on the insurance industry. Download the full report “The AI Revolution in Insurance: A Reality Check” on the NTT DATA website.
Disrupting Internal Processes with Artificial Intelligence APIsIBM Watson
Get a high-level overview of Watson Developer Cloud, including a deep dive into the four GTM models, different pricing scenarios and an overview of the tools you have to support your choice.
How Insurers Can Harness Artificial IntelligenceCognizant
Once science fiction, artificial intelligence now holds vast potential for insurers interested in reinventing their business models and transforming customer experience.
A business level introduction to Artificial Intelligence - Louis Dorard @ PAP...PAPIs.io
This document provides an overview of artificial intelligence and machine learning. It discusses how machine learning works using data and examples to build intelligence. Examples of everyday and business uses of machine learning are presented, such as predicting property prices, email spam detection, and demand forecasting. The document outlines the types of analytics that can be performed, from descriptive to predictive to prescriptive. It also discusses how machine learning models are developed and deployed through predictive APIs.
This document discusses artificial intelligence and its commercial applications. It defines AI as using computer science, biology, psychology and other fields to develop computers that can think and act intelligently like humans. It then discusses several commercial applications of AI including decision support systems, information retrieval systems, virtual reality, and robotics. The document also provides overviews of expert systems, which use knowledge bases to solve problems like human experts, and fuzzy logic systems, which allow for approximate reasoning similar to human reasoning.
A brief history of artificial intelligence for businessJack C Crawford
Since the 1960s, Artificial Intelligence has promised us benefits in business and in our personal lives. This presentation takes us from the early days up to machine learning and applications for enterprise businesses that are delivering personalized experiences to customers ... to a "segment of one."
How large-scale image analytics (near-real time analysis of satellite images, machine learning) could help (re-)insurer anticipate natural catastrophes and estimate damages more precisely
Transform your Business with AI, Deep Learning and Machine LearningSri Ambati
Video: https://www.youtube.com/watch?v=R3IXd1iwqjc
Meetup: http://www.meetup.com/SF-Bay-ACM/events/231709894/
In this talk, Arno Candel presents a brief history of AI and how Deep Learning and Machine Learning techniques are transforming our everyday lives. Arno will introduce H2O, a scalable open-source machine learning platform, and show live demos on how to train sophisticated machine learning models on large distributed datasets. He will show how data scientists and application developers can use the Flow GUI, R, Python, Java, Scala, JavaScript and JSON to build smarter applications, and how to take them to production. He will present customer use cases from verticals including insurance, fraud, churn, fintech, and marketing.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Deep Learning - The Past, Present and Future of Artificial IntelligenceLukas Masuch
The document provides an overview of deep learning, including its history, key concepts, applications, and recent advances. It discusses the evolution of deep learning techniques like convolutional neural networks, recurrent neural networks, generative adversarial networks, and their applications in computer vision, natural language processing, and games. Examples include deep learning for image recognition, generation, segmentation, captioning, and more.
Splunk is a powerful platform for understanding your data. The preview of the Machine Learning Toolkit and Showcase App extends Splunk with a rich suite of advanced analytics and machine learning algorithms, which are exposed via an API and demonstrated in a showcase. In this session, we'll present an overview of the app architecture and API and then show you how to use Splunk to easily perform a wide variety of tasks, including outlier detection, predictive analytics, event clustering, and anomaly detection. We’ll use real data to explore these techniques and explain the intuition behind the analytics.
Machine Learning and Analytics Breakout SessionSplunk
This document provides an overview of operationalizing machine learning with Splunk. It discusses how machine learning can be used to analyze historical and real-time data to make predictions. Common use cases for machine learning in IT operations, security, and business analytics are described. The document outlines the machine learning process of exploring data, fitting models, applying and validating models, and operationalizing predictions. It promotes Splunk's machine learning toolkit and app for building machine learning workflows and models within Splunk.
This document provides an overview of operationalizing machine learning with Splunk. It discusses why machine learning is needed given that data is constantly changing. It then defines machine learning and describes the typical workflow of exploring data, fitting models, applying models in production, and continuously validating models. Examples of using machine learning for IT operations, security, and business analytics are presented. The document concludes by describing how Splunk's machine learning toolkit and apps can be used to operationalize machine learning models.
This document provides an overview of operationalizing machine learning with Splunk. It discusses why machine learning is needed given that data is constantly changing. It then defines machine learning and describes the typical workflow of exploring data, fitting models, applying models in production, and continuously validating models. Examples of use cases for IT operations, security, and business analytics are presented. The document concludes by describing how Splunk's machine learning toolkit and apps can be used to operationalize machine learning models.
Splunk is a powerful platform for understanding your data. The preview of the Machine Learning Toolkit and Showcase App extends Splunk with a rich suite of advanced analytics and machine learning algorithms. In this session, we'll present an overview of the app architecture and API and show you how to use Splunk to easily perform a variety of tasks, including outlier and anomaly detection, predictive analytics, and event clustering. We’ll use real data to explore these techniques and explain the intuition behind the analytics.
The document discusses machine learning and analytics capabilities in Splunk. It provides an overview of machine learning concepts like supervised vs. unsupervised learning. It then introduces the ML Toolkit and Showcase App, which adds machine learning commands to the Splunk Programming Language. The app uses popular Python machine learning libraries behind the scenes. The document demonstrates how to fit and apply models to data in Splunk using these new commands. It also outlines some limitations and future plans for the preview release app. Example use cases for predictive modeling in areas like capacity planning, insider threat detection, and customer churn prediction are presented.
Machine Learning & IT Service Intelligence for the Enterprise: The Future is ...Precisely
Enterprises with mainframes and Cloud/server architectures face unique issues and challenges and if your enterprise delivers a service whose operation spans mainframe and distributed and/or Cloud infrastructures (e.g. a mobile banking/customer app), this webinar is for you.
See how you can gain unique business and service-relevant context using your own machine data, including that from your z/OS mainframe. Implicitly learn patterns, eliminate costly false alerts, identify anomalies, and baseline normal operations by employing advanced analytics driven by machine learning. You’ll also see and learn about:
• Accelerating root-cause analysis and getting ahead of customer-impacting outages and slow-downs for your service
• “Glass Table” view for clickable visualization of the entire service-relevant infrastructure
• Machine Learning in IT Service Intelligence
• The Machine Learning Toolkit available today
The document discusses machine learning and big data research at the Data Science Institute of Multimedia University. The institute conducts research across various domains using machine learning techniques. Some areas of research include high performance computing for massive data sources, social media analytics, smart cities, and public health analytics. The document provides examples of how machine learning can be applied to problems in business analytics like predictive customer churn analysis and operations analytics like predictive maintenance. It also outlines the basic machine learning process of obtaining data, exploring it, building predictive models, applying and validating models, and taking action based on forecasts.
SplunkLive! Frankfurt 2018 - Legacy SIEM to Splunk, How to Conquer Migration ...Splunk
Presented at SplunkLive! Frankfurt 2018:
Introduction
SIEM Migration Methodology
Use Cases
Datasources & Data Onboarding
ES Architecture
Third-Party Integrations
You Got This!
SplunkLive! Munich 2018: Get More From Your Machine Data Splunk & AISplunk
Presented at SplunkLive! Munich 2018:
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
How to analyze text data for AI and ML with Named Entity RecognitionSkyl.ai
About the webinar
The Internet is a rich source of data, mainly textual data. But making use of huge quantities of data is a complex and time-consuming task. NLP can help with this problem through the use of Named Entity Recognition systems. Named entities are terms that refer to names, organizations, locations, values etc. NER annotates texts – marking where and what type of named entities occurred in it. This step significantly simplifies further use of such data, allowing for easy categorization of documents, analyze sentiments, improving automatically generated summaries etc.
Further, in many industries, the vocabulary keeps changing and growing with new research, abbreviations, long and complex constructions, and makes it difficult to get accurate results or use rule-based methods. Named Entity Recognition and Classification can help to effectively extract, tag, index, and manage this fast and ever-growing knowledge.
Through this webinar, we will understand how NER can be used to extract key entities from large volumes of text data
What you will learn
- How organizations are leveraging Named Entity Recognition across various industries
- Live demo - Identify & classify complex terms & with NERC (Named Entity Recognition & Categorization)
- Best practice to automate machine learning models in hours not months
SplunkLive! Frankfurt 2018 - Get More From Your Machine Data with Splunk AISplunk
Presented at SpluknLive! Frankfurt 2018:
Why AI & Machine Learning?
What is Machine Learning?
Splunk's Machine Learning Tour
Use Cases & Customer Stories
Wrap Up
Presented at SplunkLive! Paris 2018: Get More From Your Machine Data With Splunk AI
- Why AI & Machine Learning?
- What is Machine Learning?
- Splunk's Machine Learning Tour
- Use Cases & Customer Stories
SplunkLive! Zurich 2018: Get More From Your Machine Data with Splunk & AISplunk
This presentation discusses how Splunk and machine learning can help organizations get more value from their machine data. It describes how machine learning can improve decision making, uncover hidden trends, alert on deviations, and forecast incidents. The presentation provides an overview of Splunk's machine learning capabilities, including search, packaged solutions, and the machine learning toolkit. It also showcases several customer use cases that have benefited from Splunk's machine learning offerings, such as network incident detection, security/fraud prevention, and optimizing operations.
SplunkLive! Paris 2018: Legacy SIEM to SplunkSplunk
Presented at SplunkLive! Paris 2018: Legacy SIEM to Splunk, How to Conquer Migration and Not Die Trying:
- Why?
- SIEM Replacement
- Use Cases
- Data Sources & Data Onboarding
- Architecture
- Third Party Integrations
- You Got This
-
Model Monitoring at Scale with Apache Spark and VertaDatabricks
For any organization whose core product or business depends on ML models (think Slack search, Twitter feed ranking, or Tesla Autopilot), ensuring that production ML models are performing with high efficacy is crucial. In fact, according to the McKinsey report on model risk, defective models have led to revenue losses of hundreds of millions of dollars in the financial sector alone. However, in spite of the significant harms of defective models, tools to detect and remedy model performance issues for production ML models are missing.
Based on our experience building ML debugging and robustness tools at MIT CSAIL and managing large-scale model inference services at Twitter, Nvidia, and now at Verta, we developed a generalized model monitoring framework that can monitor a wide variety of ML models, work unchanged in batch and real-time inference scenarios, and scale to millions of inference requests. In this talk, we focus on how this framework applies to monitoring ML inference workflows built on top of Apache Spark and Databricks. We describe how we can supplement the massively scalable data processing capabilities of these platforms with statistical processors to support the monitoring and debugging of ML models.
Learn how ML Monitoring is fundamentally different from application performance monitoring or data monitoring. Understand what model monitoring must achieve for batch and real-time model serving use cases. Then dig in with us as we focus on the batch prediction use case for model scoring and demonstrate how we can leverage the core Apache Spark engine to easily monitor model performance and identify errors in serving pipelines.
ADV Slides: What the Aspiring or New Data Scientist Needs to Know About the E...DATAVERSITY
Many data scientists are well grounded in creating accomplishment in the enterprise, but many come from outside – from academia, from PhD programs and research. They have the necessary technical skills, but it doesn’t count until their product gets to production and in use. The speaker recently helped a struggling data scientist understand his organization and how to create success in it. That turned into this presentation, because many new data scientists struggle with the complexities of an enterprise.
Advanced Use Cases for Analytics Breakout SessionSplunk
This document discusses Splunk's analytics capabilities and how to develop analytics for business users. It introduces personas as user types in a Splunk deployment beyond core IT. Requirements should be gathered for each persona, including their business problem, relevant data sources, and how they prefer to consume results. Searches and data models can then be developed and delivered through dashboards, visualizations, or third-party tools. Advanced analytics techniques discussed include anomaly detection, data visualization, predictive analytics, and demos. The document encourages reaching out for help from Splunk technical teams to grow analytics beyond IT.
Similar to Machine Learning and Analytics Breakout Session (20)
.conf Go 2023 - Raiffeisen Bank InternationalSplunk
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks which are lists of standard tasks to follow for investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks to automate incident response. This helps analysts automate faster without wasting time by having standard, vendor-agnostic procedures.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu...Splunk
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?"
Speaker: Stefan Stein -
Teamleiter CERT | gematik GmbH M.Eng. IT-Sicherheit & Forensik,
doctorate student at TH Brandenburg & Universität Dresden
El documento describe la transición de Cellnex de un Centro de Operaciones de Seguridad (SOC) a un Equipo de Respuesta a Incidentes de Seguridad (CSIRT). La transición se debió al crecimiento de Cellnex y la necesidad de automatizar procesos y tareas para mejorar la eficiencia. Cellnex implementó Splunk SIEM y SOAR para automatizar la creación, remediación y cierre de incidentes. Esto permitió al personal concentrarse en tareas estratégicas y mejorar KPIs como tiempos de resolución y correos electrónicos anal
conf go 2023 - El camino hacia la ciberseguridad (ABANCA)Splunk
Este documento resume el recorrido de ABANCA en su camino hacia la ciberseguridad con Splunk, desde la incorporación de perfiles dedicados en 2016 hasta convertirse en un centro de monitorización y respuesta con más de 1TB de ingesta diaria y 350 casos de uso alineados con MITRE ATT&CK. También describe errores cometidos y soluciones implementadas, como la normalización de fuentes y formación de operadores, y los pilares actuales como la automatización, visibilidad y alineación con MITRE ATT&CK. Por último, señala retos
Splunk - BMW connects business and IT with data driven operations SRE and O11ySplunk
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale – Imperial College LondonSplunk
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen...Splunk
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. It includes an agenda introducing observability and why Splunk for observability. It discusses the need for modernization initiatives in companies and the thousands of changes required. It presents that Splunk provides end-to-end visibility across metrics, traces and logs to detect, troubleshoot and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment. Finally, it concludes that observability with Splunk can drive results like reduced downtime and faster innovation.
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppsecJames Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams, employing DevSecOps principles to integrate security throughout the development
Performance Budgets for the Real World by Tammy EvertsScyllaDB
Performance budgets have been around for more than ten years. Over those years, we’ve learned a lot about what works, what doesn’t, and what we need to improve. In this session, Tammy revisits old assumptions about performance budgets and offers some new best practices. Topics include:
• Understanding performance budgets vs. performance goals
• Aligning budgets with user experience
• Pros and cons of Core Web Vitals
• How to stay on top of your budgets to fight regressions
How Netflix Builds High Performance Applications at Global ScaleScyllaDB
We all want to build applications that are blazingly fast. We also want to scale them to users all over the world. Can the two happen together? Can users in the slowest of environments also get a fast experience? Learn how we do this at Netflix: how we understand every user's needs and preferences and build high performance applications that work for every user, every time.
How RPA Help in the Transportation and Logistics Industry.pptxSynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
What Not to Document and Why_ (North Bay Python 2024)Margaret Fero
We’re hopefully all on board with writing documentation for our projects. However, especially with the rise of supply-chain attacks, there are some aspects of our projects that we really shouldn’t document, and should instead remediate as vulnerabilities. If we do document these aspects of a project, it may help someone compromise the project itself or our users. In this talk, you will learn why some aspects of documentation may help attackers more than users, how to recognize those aspects in your own projects, and what to do when you encounter such an issue.
These are slides as presented at North Bay Python 2024, with one minor modification to add the URL of a tweet screenshotted in the presentation.
AI_dev Europe 2024 - From OpenAI to Opensource AIRaphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo...Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
Transcript: Details of description part II: Describing images in practice - T...BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
UiPath Community Day Kraków: Devs4Devs ConferenceUiPathCommunity
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to spike your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
Scaling Connections in PostgreSQL Postgres Bangalore(PGBLR) Meetup-2 - MydbopsMydbops
This presentation, delivered at the Postgres Bangalore (PGBLR) Meetup-2 on June 29th, 2024, dives deep into connection pooling for PostgreSQL databases. Aakash M, a PostgreSQL Tech Lead at Mydbops, explores the challenges of managing numerous connections and explains how connection pooling optimizes performance and resource utilization.
Key Takeaways:
* Understand why connection pooling is essential for high-traffic applications
* Explore various connection poolers available for PostgreSQL, including pgbouncer
* Learn the configuration options and functionalities of pgbouncer
* Discover best practices for monitoring and troubleshooting connection pooling setups
* Gain insights into real-world use cases and considerations for production environments
This presentation is ideal for:
* Database administrators (DBAs)
* Developers working with PostgreSQL
* DevOps engineers
* Anyone interested in optimizing PostgreSQL performance
Contact info@mydbops.com for PostgreSQL Managed, Consulting and Remote DBA Services
The Rise of Supernetwork Data Intensive ComputingLarry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
How to Avoid Learning the Linux-Kernel Memory ModelScyllaDB
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with a simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
AC Atlassian Coimbatore Session Slides( 22/06/2024)apoorva2579
This is the combined Sessions of ACE Atlassian Coimbatore event happened on 22nd June 2024
The session order is as follows:
1.AI and future of help desk by Rajesh Shanmugam
2. Harnessing the power of GenAI for your business by Siddharth
3. Fallacies of GenAI by Raju Kandaswamy
2. 2
Disclaimer
During the course of this presentation, we may make forward looking statements regarding future events
or the expected performance of the company. We caution you that such statements reflect our current
expectations and estimates based on factors currently known to us and that actual events or results
could differ materially. For important factors that may cause actual results to differ from those contained
in our forward-looking statements, please review our filings with the SEC. The forward-looking
statements made in the this presentation are being made as of the time and date of its live presentation.
If reviewed after its live presentation, this presentation may not contain current or accurate information.
We do not assume any obligation to update any forward looking statements we may make.
In addition, any information about our roadmap outlines our general product direction and is subject to
change at any time without notice. It is for informational purposes only and shall not, be incorporated
into any contract or other commitment. Splunk undertakes no obligation either to develop the features
or functionality described or to include any such feature or functionality in a future release.
6. 6
ML 101: What is it?
• Machine Learning (ML) is a process for generalizing from examples
– Examples = example or “training” data
– Generalizing = building “statistical models” to capture correlations
– Process = ML is never done, you must keep validating & refitting models
• Simple ML workflow:
– Explore data
– FIT models based on data
– APPLY models in production
– Keep validating models
“All models are wrong, but some are useful.”
- George Box
7. 7
3 Types of Machine Learning
1. Supervised Learning: generalizing from labeled data
8. 8
3 Types of Machine Learning
2. Unsupervised Learning: generalizing from unlabeled data
9. 9
3 Types of Machine Learning
3. Reinforcement Learning: generalizing from rewards in time
Leitner System Recommender systems
11. 11
IT Ops: Predictive Maintenance
1. Get resource usage data (CPU, latency, outage reports)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast resource saturation, demand & usage
5. Surface incidents to IT Ops, who INVESTIGATES & ACTS
Problem: Network outages and truck rolls cause big time & money expense
Solution: Build predictive model to forecast outage scenarios, act pre-emptively & learn
12. 12
Security: Find Insider Threats
Problem: Security breaches cause big time & money expense
Solution: Build predictive model to forecast threat scenarios, act pre-emptively & learn
1. Get security data (data transfers, authentication, incidents)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast abnormal behavior, risk scores & notable events
5. Surface incidents to Security Ops, who INVESTIGATES & ACTS
13. 13
Business Analytics: Predict Customer Churn
Problem: Customer churn causes big time & money expense
Solution: Build predictive model to forecast possible churn, act pre-emptively & learn
1. Get customer data (set-top boxes, web logs, transaction history)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast churn rate & identify customers likely to churn
5. Surface incidents to Business Ops, who INVESTIGATES & ACTS
14. 14
Summary: The ML Process
Problem: <Stuff in the world> causes big time & money expense
Solution: Build predictive model to forecast <possible incidents>, act pre-emptively & learn
1. Get all relevant data to problem
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast KPIs & notable events associated to use case
5. Surface incidents to X Ops, who INVESTIGATES & ACTS
Operationalize
16. 16
Splunk User Behavior Analytics (UBA)
• ~100% of breaches involve valid credentials (Mandiant Report)
• Need to understand normal & anomalous behaviors for ALL users
• UBA detects Advanced Cyberattacks and Malicious Insider Threats
• Lots of ML under the hood:
– Behavior Baselining & Modeling
– Anomaly Detection (30+ models)
– Advanced Threat Detection
• E.g., Data Exfil Threat:
– “Saw this strange login & data transfer
for user mpittman at 3am in China…”
– Surface threat to SOC Analysts
17. 17
Machine Learning in Splunk ITSI
Adaptive Thresholding:
• Learn baselines & dynamic thresholds
• Alert & act on deviations
• Manage for 1000s of KPIs & entities
• Stdev/Avg, Quartile/Median, Range
Anomaly Detection:
• Find “hiccups” in expected patterns
• Catches deviations beyond thresholds
• Uses Holt-Winters algorithm
18. 18
ML Toolkit & Showcase
• Splunk Supported framework for building ML Apps
– Get it for free: http://tiny.cc/splunkmlapp
• Leverages Python for Scientific Computing (PSC) add-on:
– Open-source Python data science ecosystem
– NumPy, SciPy, scitkit-learn, pandas, statsmodels
• Showcase use cases: Predict Hard Drive Failure, Server Power
Consumption, Application Usage, Customer Churn & more
• Standard algorithms out of the box:
– Supervised: Logistic Regression, SVM, Linear Regression, Random Forest, etc.
– Unsupervised: KMeans, DBSCAN, Spectral Clustering, PCA, KernelPCA, etc.
• Implement one of 300+ algorithms by editing Python scripts
20. 20
Analysts Business Users
1. Get Data & Find Decision-Makers
2
IT Users
ODBC
SDK
API
DB Connect
Look-Ups
Ad Hoc
Search
Monitor
and Alert
Reports /
Analyze
Custom
Dashboards
GPS /
Cellular
Devices Networks Hadoop
Servers Applications Online
Shopping Carts
Analysts Business Users
Structured Data Sources
CRM ERP HR Billing Product Finance
Data Warehouse
Clickstreams
21. 21
2. Explore Data, Build Searches & Dashboards
• Start with the Exploratory Data Analysis phase
– “80% of data science is sourcing, cleaning, and preparing the data”
– Tip: leverage ITSI KPIs – lots of domain knowledge
• For each data source, build “data diagnostic” dashboard
– What’s interesting? Throw up some basic charts.
– What’s relevant for this use case?
– Any anomalies? Are thresholds useful?
• Mix data streams & compute aggregates
– Compute KPIs & statistics w/ stats, eventstats, etc.
– Enrich data streams with useful structured data
– stats count by X Y – where X,Y from different sources
– Build new KPIs from what you find
22. 22
3. Fit, Apply & Validate Models
• ML SPL – New grammar for doing ML in Splunk
• fit – fit models based on training data
– [training data] | fit LinearRegression costly_KPI
from feature1 feature2 feature3 into my_model
• apply – apply models on testing and production data
– [testing/production data] | apply my_model
• Validate Your Model (The Hard Part)
– Why hard? Because statistics is hard! Also: model error ≠ real world risk.
– Analyze residuals, mean-square error, goodness of fit, cross-validate, etc.
– Take Splunk’s Analytics & Data Science Education course
23. 23
4. Predict & Act
• Forecast KPIs & predict notable events
– When will my system have a critical error?
– In which service or process?
– What’s the probable root cause?
• How will people act on predictions?
– Is this a Sev 1/2/3 event? Who responds?
– Deliver via Notable Events or dashboard?
– Human response or automated response?
• How do you improve the models?
– Iterate, add more data, extract more features
– Keep track of true/false positives
24. 24
5. Operationalize Your Models
• Operationalizing closes the loop of the ML Process:
1. Get data
2. Explore data & fit models
3. Apply & validate models
4. Forecast KPIs & events
5. Surface incidents to Ops team
• When you deliver the outcome, keep track of the response
– Human-generated response (detailed journal logs, etc)
– Machine-generated response (workflow actions, etc)
– External knowledge (closed tickets data, DB records, etc)
• Then operationalize: feed back Ops analysis to data inputs, repeat
• Lots of hard work & stats, but lots of value will come out.
Operationalize
26. 26
Next Steps with Splunk ML
• Reach out to your Tech Team! We can help architect ML workflows.
• Lots of ML commands in Core Splunk (predict, anomalydetection, stats)
• ML Toolkit & Showcase – available and free, ready to use
– Get it for free: http://tiny.cc/splunkmlapp
• Splunk UBA: Applied ML for Security
– Unsupervised learning of Users & Entities
– Surfaces Anomalies & Threats
• Splunk ITSI: Applied ML for ITOA use cases
– Manage 1000s of KPIs & alerts
– Adaptive Thresholding & Anomaly Detection
• ML New Product Initiative (NPI) Program:
– Connect with Product & Engineering teams - mlprogram@splunk.com
27. 27
SEPT 26-29, 2016
WALT DISNEY WORLD, ORLANDO
SWAN AND DOLPHIN RESORTS
• 5000+ IT & Business Professionals
• 3 days of technical content
• 165+ sessions
• 80+ Customer Speakers
• 35+ Apps in Splunk Apps Showcase
• 75+ Technology Partners
• 1:1 networking: Ask The Experts and Security
Experts, Birds of a Feather and Chalk Talks
• NEW hands-on labs!
• Expanded show floor, Dashboards Control
Room & Clinic, and MORE!
The 7th Annual Splunk Worldwide Users’ Conference
PLUS Splunk University
• Three days: Sept 24-26, 2016
• Get Splunk Certified for FREE!
• Get CPE credits for CISSP, CAP, SSCP
• Save thousands on Splunk education!
Editor's Notes
Time for ML demo!
Get the ML App: http://tiny.cc/splunkmlapp
Want more? Take Splunk’s Analytics & Data Science course!
Course prework: http://bit.ly/splunkanalytics
We’re headed to the East Coast!
2 inspired Keynotes – General Session and Security Keynote + Super Sessions with Splunk Leadership in Cloud, IT Ops, Security and Business Analytics!
165+ Breakout sessions addressing all areas and levels of Operational Intelligence – IT, Business Analytics, Mobile, Cloud, IoT, Security…and MORE!
30+ hours of invaluable networking time with industry thought leaders, technologists, and other Splunk Ninjas and Champions waiting to share their business wins with you!
Join the 50%+ of Fortune 100 companies who attended .conf2015 to get hands on with Splunk. You’ll be surrounded by thousands of other like-minded individuals who are ready to share exciting and cutting edge use cases and best practices. You can also deep dive on all things Splunk products together with your favorite Splunkers.
Head back to your company with both practical and inspired new uses for Splunk, ready to unlock the unimaginable power of your data! Arrive in Orlando a Splunk user, leave Orlando a Splunk Ninja!
REGISTRATION OPENS IN MARCH 2016 – STAY TUNED FOR NEWS ON OUR BEST REGISTRATION RATES – COMING SOON!