Join us to learn what is new in the Splunk App for Stream and how it can help you use wire/network data analytics to proactively resolve application and IT operational issues and to efficiently analyze security threats in real time, across your cloud and on-premises infrastructures. Additionally, you will learn about Splunk MINT, which allows you to gain operational intelligence on the availability, performance, and usage of your mobile apps. You’ll learn how to instrument your mobile apps for operational insight, and how you can build the dashboards, alerts, and searches you need to gain real-time insight on your mobile apps.
Machine Data 101: Turning Data Into Insight is a presentation about using Splunk software to analyze machine data. It discusses topics such as:
- What machine data is and examples of common sources like log files, social media, call center systems
- How Splunk indexes machine data from various sources in real-time regardless of format
- Techniques for enriching data in Splunk like tags, field aliases, calculated fields, event types, and lookups from external data sources
- Examples of collecting non-traditional data sources into Splunk like network data, HTTP events, databases, and mobile app data
The presentation provides an overview of Splunk's machine data platform and techniques for analyzing and enriching machine data.
Splunk Enterprise for Information Security Hands-On (Splunk)
Splunk is the ultimate tool for the InfoSec hunter. In this unique session, we’ll dive straight into the Splunk search interface, and interact with wire data harvested from various interesting and hostile environments, as well as some web access logs. We’ll show how you can use Splunk Enterprise with a few free Splunk applications to hunt for attack patterns. We’ll also demonstrate some ways to add context to your data in order to reduce false positives and more quickly respond to information. Bring your laptop – you’ll need a web browser to access our demo systems!
This document discusses Splunk's data onboarding process, which provides a systematic way to ingest new data sources into Splunk. It ensures new data is instantly usable and valuable. The process involves several steps: pre-boarding to identify the data and required configurations; building index-time configurations; creating search-time configurations like extractions and lookups; developing data models; testing; and deploying the new data source. Following this process helps get new data onboarding right the first time and makes the data immediately useful.
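As a hedged illustration of what the index-time and search-time steps in that process typically produce, here is a hypothetical props.conf stanza for a new data source. The stanza name, patterns, and lookup are invented for this sketch; the setting names (LINE_BREAKER, TIME_PREFIX, EXTRACT-, LOOKUP-) are standard Splunk configuration keys.

```
# props.conf -- hypothetical stanza for a newly onboarded source
[myapp:log]
# Index-time: event breaking and timestamp recognition
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
# Search-time: field extraction and lookup enrichment
EXTRACT-status = \bstatus=(?<status>\d{3})
LOOKUP-user = userinfo uid OUTPUT department
```

Getting the index-time settings right before deployment matters most, since event breaking and timestamps cannot be corrected after the data is indexed.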
SplunkLive! München 2016 - Splunk Enterprise 6.3 - Data Onboarding (Splunk)
This document discusses new features in Splunk Enterprise 6.3, including breakthrough performance and scale improvements that double search and indexing speed and increase capacity by 20-50%, lowering total cost of ownership by 20%+. It also describes new capabilities for advanced analysis and visualization, high-volume event collection, and an enterprise-scale platform with improved support for DevOps, IoT data analysis, and third-party integrations. A new HTTP Event Collector provides a token-based JSON API for ingesting events from various sources.
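The HTTP Event Collector accepts token-authenticated JSON over HTTP. Below is a minimal Python sketch of building such a request; the host and token are placeholders, while the /services/collector/event endpoint and the "Splunk <token>" Authorization header are the documented HEC interface.

```python
import json
import urllib.request

# Placeholder deployment values -- substitute your own host and HEC token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # hypothetical token

def build_hec_request(event, sourcetype="myapp:json"):
    """Build a POST request; HEC expects a JSON body with an 'event' key."""
    payload = json.dumps({"event": event, "sourcetype": sourcetype}).encode("utf-8")
    return urllib.request.Request(
        HEC_URL,
        data=payload,
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request({"action": "login", "user": "alice"})
print(req.get_header("Authorization"))  # -> Splunk 00000000-0000-0000-0000-000000000000
# urllib.request.urlopen(req)  # uncomment to actually send to a live HEC endpoint
```

Because HEC is plain HTTP plus JSON, any language or agent that can issue a POST can feed events into Splunk without a forwarder.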
Elevate your Splunk Deployment by Better Understanding your Value Breakfast S... (Splunk)
This document discusses how to better understand the value of a Splunk deployment through assessing data sources. It presents a data source assessment tool to map data sources to use cases and organizational groups to identify opportunities. The tool shows which data sources are indexed and overlap between groups. It aims to maximize benefits from machine data by supporting business objectives and enabling broader impact.
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
Democratising Security: Update Your Policies or Update Your CV (ExtraHop Networks)
Security is everyone's responsibility. That’s the lesson learned as enterprises seek to improve their detection and response for cyber incidents. This session introduces a new model where InfoSec sets the policies and delegates monitoring to application teams.
The document summarizes Splunk Enterprise 6.3, highlighting key new features and capabilities. It discusses breakthrough performance and scale improvements including doubled search and indexing speed and 20-50% increased capacity. It also covers advanced analysis and visualization features like anomaly detection, geospatial mapping, and single-value display. New capabilities for high-volume event collection and an enterprise-scale platform with expanded management, custom alert actions, and data integrity control are also summarized.
Splunk conf2014 - Dashboard Fun - Creating an Interactive Transaction Profiler (Splunk)
Using Simple XML and Splunk Enterprise, learn how to create easy interactive dashboards to explore data. This demo showcases great tools to put in the hands of Splunk users, help desk users, and IT operations staff.
PaNDA - a platform for Network Data Analytics: an overview (Cisco DevNet)
A session in the DevNet Zone at Cisco Live, Berlin. PaNDA is a platform for data aggregation and distribution which can be used for data analytics applications being developed at Cisco. PaNDA was incubated in Intercloud and is now being further developed for the Virtual Managed Services (VMS) solution and other Cisco solutions. The session details why we need a platform for OSS analytics and how we tackle it.
This document provides an overview of data enrichment techniques in Splunk including tags, field aliases, calculated fields, event types, and lookups. It describes how tags can add context and categorize data, field aliases can simplify searches by normalizing field labels, and lookups can augment data with additional external fields. The document also discusses various data sources that Splunk can index such as network data, HTTP events, alerts, scripts, databases, and modular inputs for custom data collection.
In addition to seeing the latest features in Splunk Enterprise, learn some of the top commands that will solve most search and analytics needs. Ninjas can use these blindfolded. New features will be demonstrated in the following areas: TCO and performance improvements, platform management, and new interactive visualizations.
Power of Splunk Search Processing Language (SPL) (Splunk)
The document discusses Splunk's Search Processing Language (SPL) for searching and analyzing machine data. It provides an overview of SPL and its commands, and gives examples of how SPL can be used for tasks like searching, charting, enriching data, identifying anomalies, transactions, and custom commands. The presentation aims to showcase the power and flexibility of SPL for tasks like searching large datasets, visualizing data, combining different data sources, and extending SPL's capabilities through custom commands.
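As an illustrative sketch of the kinds of commands the presentation covers -- searching, enriching with a lookup, grouping events into transactions, and charting -- here is a hypothetical SPL pipeline. The sourcetype, lookup table, and field names are invented for the example:

```
sourcetype=access_combined status>=500
| lookup userinfo uid OUTPUT department
| transaction clientip maxpause=5m
| timechart span=1h count by department
```

Each pipe stage consumes the previous stage's results, which is what lets a handful of SPL commands compose into most search and reporting needs.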
This document discusses how Splunk can help organizations address challenges related to escalating IT complexity. It notes that IT environments have become more complex with disconnected point solutions, over 70% of time spent maintaining rather than innovating, and latency in resolving issues measured in hours or days. Splunk provides a single platform to gather, analyze, and search machine data from various sources in real-time. It allows correlating data across silos for faster problem resolution. The document highlights how Splunk reduced escalations by 90% and mean time to resolution by 67% for one customer. It then discusses how Splunk offers pre-built apps for monitoring different parts of the IT infrastructure and applications.
Splunk is a powerful platform that can harness your machine data and turn it into valuable information, enabling your business to make informed decisions and taking your organization from reactive to proactive. Like any other platform, Splunk is only as powerful as the data it has access to, so in this session we will conduct a walk-through of how to successfully onboard data, with samples ranging from simple to complex. We will also look at how to use common TAs (technology add-ons) to bring valuable data into Splunk. This session is designed to give you a better understanding of how to onboard data into Splunk, enabling you to unlock the power of your data.
How to Design, Build and Map IT and Business Services in Splunk (Splunk)
Your IT department supports critical business functions, processes and products. You're most effective when your technology initiatives are closely aligned and measured with specific business objectives. This session covers best practices and techniques for designing and building an effective service model, using the domain knowledge of your experts and capturing and reporting on key metrics that everyone can understand. We will design a sample service model and map its services to performance indicators to track operational and business objectives. We will also show you how to make Splunk service-aware with Splunk IT Service Intelligence (ITSI).
SplunkLive! München 2016 - Splunk für Security (Splunk)
This document provides an overview of Splunk's security analytics and user behavior analytics capabilities for detecting threats like cyber attacks and insider threats. It discusses how Splunk uses machine learning and behavioral analytics on large datasets to detect anomalies and threats. Examples are given showing how Splunk can detect suspicious user activities across the cyber kill chain and identify external attacks and insider threats. Key workflows for security analysts and threat hunters using Splunk are also outlined.
Splunk for IT Operations Breakout Session (Georg Knon)
This document discusses how IT complexity is a challenge for CIOs due to siloed technologies, disconnected point solutions, and time spent maintaining rather than innovating. It presents Splunk as a solution that provides comprehensive visibility across infrastructure, applications, databases, and more through centralized data collection and analysis. Splunk reduces problem resolution time by 67% and escalations by 90% by enabling "first responders" to search across all IT data from a single interface. The document also outlines how Splunk apps can provide insights by role and technology and its capabilities for various IT functions like virtualization, storage, and operating systems.
This document provides an overview and agenda for the Splunk App for Stream, including:
- The architecture of the Stream Forwarder for capturing wire data and routing it to Splunk.
- The architecture of the App for Stream for analyzing wire data in Splunk.
- Examples of deployment architectures for ingesting wire data.
- A customer use case where wire data from the network helped provide visibility that log data could not due to access restrictions.
This document discusses Splunk Spark Integration. It provides an overview of Splunk including its products, customers, and deployment architecture. It then discusses the benefits of integrating Splunk with Spark, including leveraging Spark's powerful computing capabilities for machine learning while indexing unstructured data in Splunk. Finally, it presents three potential solutions for the integration and some challenges to address like avoiding moving large amounts of data and maintaining a good user experience while adapting to Splunk's query language.
This document discusses new capabilities in Splunk's App for Stream and Splunk MINT products. It begins with an introduction and overview of each product. It then discusses key benefits like real-time insights, efficient cloud data collection, and fast time to value. Example use cases are provided for IT operations, security, and applications visibility. Supported protocols, platforms, and architecture options are also outlined. The document concludes by discussing challenges in mobile app delivery and how Splunk MINT addresses them through mobile data collection and correlation with other data sources.
Here’s your chance to get hands-on with Splunk for the first time! Bring your modern Mac, Windows, or Linux laptop and we’ll go through a simple install of Splunk. Then, we’ll load some sample data, and see Splunk in action – we’ll cover searching, pivot, reporting, alerting, and dashboard creation. At the end of this session you’ll have a hands-on understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll experience practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
If mobile apps are part of your business, having real-time insight on app performance, crashes, usage and transactions is critical. Data derived directly from mobile app usage—called “mobile data”—can help you deliver better performing apps and increase application visibility. With the massive increase in smartphone and mobile app usage, your app’s performance is more important than ever. Learn how to gain Operational Intelligence from your mobile apps with Splunk MINT.
Splunk Webinar: IT Operations Demo für Troubleshooting & Dashboarding (Georg Knon)
This document provides an overview of Splunk's IT operations software. It discusses the challenges facing IT operations, including siloed tools and reactive problem solving. It presents Splunk as a solution, with its ability to index and analyze machine data from any source in real-time. Key benefits highlighted include faster troubleshooting to reduce downtime, proactive monitoring to address issues before they become problems, and increased operational visibility across the IT environment. The document concludes with a demonstration of Splunk's IT service intelligence capabilities.
What is Splunk? At the end of this session you’ll have a high-level understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll see practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Splunk for Industrial Data and the Internet of Things (aliciasyc)
The IoT is a natural evolution of the world’s networks. Just as people became more connected by devices and applications during the explosion of the social media revolution, devices, sensors and industrial equipment are also becoming more connected—and are consuming and generating data at an unprecedented pace. Disparate, widely deployed connected devices can provide a unique touchpoint to real-world operations and conditions. Only a few architectures and applications are designed to handle the constant streams of real-time events, sensor readings, user interactions and application data produced by massive numbers of connected devices. Use Splunk to collect, index and harness the power of the machine data generated by connected devices and machines deployed on your local network or around the world.
An experience is a personal and emotional event we remember. Every experience is established based upon pre-determined expectations we conceive and create in our minds. It’s personal, and therefore remains a moving and evolving target in every scenario. When our experience concludes and the moment has passed, the outcome remains in our memory. Think about what makes you happy when connecting with your own device, and then think about what makes you really upset when things are hard, complicated, and slow. If the user has a bad experience in any one of these areas (simple, fast, and smart), they are likely to leave, share their negative experience, and potentially never return. Users might forget facts or details about their computing environment, but they find it difficult to forget the feeling behind a bad network experience. When something goes wrong with the network or an application, do you always get the blame?
So what can ultra-low, consistent latency deliver? Low latency is a requirement for intensive, time-critical applications. Latency is measured on a port-to-port basis: once a frame is received on an ingress port, how long does it take the frame to pass through the internal switching infrastructure and leave an egress port? The Summit X670 top-of-rack switch supports latency of around 800-900usec, while the Black Diamond chassis, BDX8, can switch frames in as little as 3usec.

We’re big believers in the value of disaggregation - of breaking down traditional data center technologies into their core components so we can build new systems that are more flexible, more scalable, and more efficient. This approach has guided Facebook from the beginning, as we’ve grown and expanded our infrastructure to connect more than 1.28 billion people around the world.
Flatter networks. Traditional data center networks have a minimum of three tiers: top of rack (ToR), aggregation and core. Often, there is more than one aggregation tier, meaning the data center could have three or more network tiers. When network traffic is primarily best effort, this is sufficient. But as more mission-critical, real-time traffic flows into the data center, it becomes critical that organizations move to two-tier networks.
An increase in east-west traffic flows. Legacy data center networks are designed for traffic to flow from the edge of the network into the core and then back to the edge in a north-south direction. Today, however, factors such as workforce mobility, Hadoop, big data and other applications are driving east-west traffic flows from server to server.
Virtualization of other IT assets. Historically, compute resources such as processor, memory and storage were resident in the server itself. Over time, more and more of these resources are being put into “pools” that can be accessed on demand. In this case, the data center network becomes a “fabric” that acts as the backplane for the virtualized data center.
Agile Gurugram 2023 | Observability for Modern Applications. How does it help... (AgileNetwork)
This document discusses observability for modern applications. It begins by defining observability as the ability to observe what is happening inside a system. Observability helps measure key performance indicators and allows teams to react faster to issues. In cloud native environments, observability fits by instrumenting applications to capture logs, traces, metrics and health data which are then transmitted to analytics tools. The document outlines the different pillars of application instrumentation - logs to see what happened, traces to see how it happened, metrics to see how much happened, and health checks to see system status. It discusses OpenTelemetry as an open source observability framework to address prior vendor lock-in issues and competing standards.
SplunkLive! Zurich 2018: Integrating Metrics and Logs (Splunk)
This document discusses integrating metrics and logs in Splunk for enhanced troubleshooting and monitoring. It provides an overview of metrics and how they are defined, compared to events. Metrics support in Splunk allows for more efficient aggregation, storage, and analysis of time-series data. Example use cases mentioned include IT operations, application performance monitoring, and IoT. Pricing is still based on uncompressed data volume ingested, with each metrics measurement licensed at around 150 bytes.
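Metrics indexes are queried with the mstats command rather than a raw event search, which is what enables the more efficient aggregation described above. A hedged sketch, with hypothetical metric and index names:

```
| mstats avg(_value) WHERE metric_name="cpu.user.percent" AND index=app_metrics span=5m BY host
```

Because each measurement is stored as a structured data point rather than a text event, such aggregations avoid the parse-at-search-time cost of event indexes.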
This document discusses how Splunk provides new visibility and analytics for IT operations. It notes that IT environments are becoming increasingly complex with more servers, applications, virtualization, and cloud services. Splunk offers a platform for operational intelligence that can consolidate machine data from various sources and provide search, monitoring, and analytics capabilities. It also discusses how Splunk apps can provide deep insights into specific technology areas.
Virtual SplunkLive! for Higher Education Overview/Customers (Splunk)
The document outlines the agenda for a virtual SplunkLive! event for higher education on January 28, 2015. It includes an overview of Splunk, presentations from various universities on their Splunk implementations, and breakout sessions on getting started with Splunk, security, and IT operations. It also provides information on Splunk products and capabilities for IT operations, security, application delivery, business analytics, industrial data, and the Internet of Things.
SplunkLive! Amsterdam 2015 Breakout - Getting Started with Splunk (Splunk)
Filip Wijnholds is a senior sales engineer at Splunk who joined the company in June 2015 after working at Intel Security for 4 years. He began his career in the networking industry working with packet capture software. The document provides an overview of Splunk's machine data platform and how it can ingest and analyze data from various sources. It also outlines the company's legal notices regarding forward-looking statements and product roadmaps.
SplunkLive! Zurich 2018: Monitoring the End User Experience with Splunk (Splunk)
This document discusses using Splunk to gain insights into end user experience and the factors that influence experience. Splunk provides a platform approach to monitor applications across the full technology stack from networks to databases. It can ingest data from various sources, including APM tools, and provide visibility into both instrumented and non-instrumented applications and environments. Splunk also offers predictive analytics capabilities and allows various stakeholders like operations and business teams to access and analyze data. The document demonstrates how Splunk can help organizations improve user experience, application performance, and collaboration between teams.
The document discusses how networks need to change to accommodate new demands like mobility, virtualization, and changing traffic patterns. It notes challenges around centralized management, flexibility, and cost reduction. New approaches are needed to close the gap between business needs and what traditional IT can deliver. The document advocates for software-defined networking and open architectures to provide innovation, flexibility, and efficiency through an ecosystem of partners. This will allow networks to better support trends like cloud computing, big data, and security services.
Similar to What’s New: Splunk App for Stream and Splunk MINT
.conf Go 2023 - Raiffeisen Bank International (Splunk)
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks, which are lists of standard tasks to follow during investigations. The goal is to have investigation playbooks in the security orchestration, automation and response (SOAR) tool perform the predefined investigation steps from the workbooks, automating incident response. This helps analysts automate faster, without wasted effort, by relying on standard, vendor-agnostic procedures.
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu... (Splunk)
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?" (roughly: "The right recipe for the digital (security) revolution toward Telematics Infrastructure 2.0 in healthcare?")
Speaker: Stefan Stein, Team Lead CERT, gematik GmbH; M.Eng. IT Security & Forensics, doctoral student at TH Brandenburg & Universität Dresden
This document describes Cellnex's transition from a Security Operations Center (SOC) to a Computer Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation, and closure of incidents. This allowed staff to concentrate on strategic tasks and improved KPIs such as resolution times and emails analyzed.
.conf Go 2023 - El camino hacia la ciberseguridad (ABANCA) (Splunk)
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from adding dedicated staff in 2016 to becoming a monitoring and response center with more than 1TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and the solutions implemented, such as normalizing data sources and training operators, along with the current pillars: automation, visibility, and alignment with MITRE ATT&CK. Finally, it outlines the challenges ahead.
Splunk - BMW connects business and IT with data-driven operations, SRE and O11y (Splunk)
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale – Imperial College London (Splunk)
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen... (Splunk)
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
- Top challenges faced in improving security posture
- Key KPIs implemented in order to measure success
- Strategies and approaches applied in the SOC
- How MITRE ATT&CK and Splunk Enterprise Security were utilised
- Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. It includes an agenda introducing observability and why Splunk for observability. It discusses the need for modernization initiatives in companies and the thousands of changes required. It shows how Splunk provides end-to-end visibility across metrics, traces and logs to detect, troubleshoot and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment. Finally, it concludes that observability with Splunk can drive results like reduced downtime and faster innovation.
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called openTelemetry, but before diving into the specifics, we’ll start with de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we’ll explore the openTelemetry community; its Special Interest Groups (SIGs), repositories, and how to become not only an end-user, but possibly a contributor.We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of openTelemetry, and know how to take their first steps to an open-source contribution!
Key Takeaways: Open source, vendor neutral instrumentation is an exciting new reality as the industry standardizes on openTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve and in order to achieve ubiquity, the project would benefit from growing our contributor community.
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
Quality Patents: Patents That Stand the Test of TimeAurora Consulting
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
AI_dev Europe 2024 - From OpenAI to Opensource AIRaphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo...Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
Are you interested in learning about creating an attractive website? Here it is! Take part in the challenge that will broaden your knowledge about creating cool websites! Don't miss this opportunity, only in "Redesign Challenge"!
AC Atlassian Coimbatore Session Slides( 22/06/2024)apoorva2579
This is the combined Sessions of ACE Atlassian Coimbatore event happened on 22nd June 2024
The session order is as follows:
1.AI and future of help desk by Rajesh Shanmugam
2. Harnessing the power of GenAI for your business by Siddharth
3. Fallacies of GenAI by Raju Kandaswamy
The DealBook is our annual overview of the Ukrainian tech investment industry. This edition comprehensively covers the full year 2023 and the first deals of 2024.
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppsecJames Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams, employing DevSecOps principles to integrate security throughout the development
Transcript: Details of description part II: Describing images in practice - T...BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Hire a private investigator to get cell phone recordsHackersList
Learn what private investigators can legally do to obtain cell phone records and track phones, plus ethical considerations and alternatives for addressing privacy concerns.
INDIAN AIR FORCE FIGHTER PLANES LIST.pdfjackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
MYIR Product Brochure - A Global Provider of Embedded SOMs & SolutionsLinda Zhang
This brochure gives introduction of MYIR Electronics company and MYIR's products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and
comprehensive solutions based on various architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services.
MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized
and Special new" Enterprises in Shenzhen, China. Our core belief is that "Our success stems from our customers' success" and embraces the philosophy
of "Make Your Idea Real, then My Idea Realizing!"
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threatsanupriti
In the rapidly evolving landscape of blockchain technology, the advent of quantum computing poses unprecedented challenges to traditional cryptographic methods. As quantum computing capabilities advance, the vulnerabilities of current cryptographic standards become increasingly apparent.
This presentation, "Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threats," explores the intersection of blockchain technology and quantum computing. It delves into the urgent need for resilient cryptographic solutions that can withstand the computational power of quantum adversaries.
Key topics covered include:
An overview of quantum computing and its implications for blockchain security.
Current cryptographic standards and their vulnerabilities in the face of quantum threats.
Emerging post-quantum cryptographic algorithms and their applicability to blockchain systems.
Case studies and real-world implications of quantum-resistant blockchain implementations.
Strategies for integrating post-quantum cryptography into existing blockchain frameworks.
Join us as we navigate the complexities of securing blockchain networks in a quantum-enabled future. Gain insights into the latest advancements and best practices for safeguarding data integrity and privacy in the era of quantum threats.
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
3. Disclaimer
During the course of this presentation, we may make forward-looking statements regarding future events or the expected performance of the company. We caution you that such statements reflect our current expectations and estimates based on factors currently known to us and that actual events or results could differ materially. For important factors that may cause actual results to differ from those contained in our forward-looking statements, please review our filings with the SEC. The forward-looking statements made in this presentation are being made as of the time and date of its live presentation. If reviewed after its live presentation, this presentation may not contain current or accurate information. We do not assume any obligation to update any forward-looking statements we may make. In addition, any information about our roadmap outlines our general product direction and is subject to change at any time without notice. It is for informational purposes only and shall not be incorporated into any contract or other commitment. Splunk undertakes no obligation either to develop the features or functionality described or to include any such feature or functionality in a future release.
5. See Everything with Splunk App for Stream
1. Enables real-time insights into private, public and hybrid cloud infrastructures
2. Delivers rapid deployment, easy scale-out and efficient wire data capture
3. Captures and analyzes critical events not found in logs or with other collection methods
Enhance Operational Intelligence With Wire Data Capture
6. Example: What Is Available From The Wire
Performance Metrics: Round Trip Time, Client Request Time, Server Reply Time, Server Send Time, Total Time Taken, Base HTML Load Time, Page Content Load Time, Total Page Load Time
Application Data: POST Content, AJAX Data, Section, Sub-Section, Page Title, Session Cookie, Proxied IP Address, Error Message
Business Data: Product ID, Customer ID, Shopping Cart ID, Cart Items, Cart Values, Discounts, Order ID, Abandoned?
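To make the three columns above concrete, here is a small Python sketch of a single captured HTTP transaction carrying performance, application and business fields, plus a helper that derives a headline timing from it. The field names are illustrative, not the exact Splunk App for Stream schema.

```python
def summarize_http_event(event):
    """Derive headline timings (in ms) from a captured HTTP transaction."""
    total = (event["client_request_time"]
             + event["server_reply_time"]
             + event["server_send_time"])
    return {
        "uri": event["uri"],
        "total_time_ms": total,
        "slow": total > 1000,  # flag transactions that took over one second
    }

# One hypothetical wire event mixing performance, application and business data
event = {
    "uri": "/cart/checkout",
    "client_request_time": 12,   # ms
    "server_reply_time": 340,    # ms
    "server_send_time": 55,      # ms
    "session_cookie": "JSESSIONID=abc123",
    "product_id": "SKU-9981",
    "cart_value": 129.95,
}
print(summarize_http_event(event))
```

Because a single event carries all three kinds of fields, one search can answer both an operations question (is checkout slow?) and a business question (what was in the cart?).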
7. Ad-hoc Analysis On Wire Data Is Challenging
• Volume, velocity and variety make it difficult to collect, explore, analyze and visualize wire data
• Distributed datacenters introduce challenges in accessing wire data from public and hybrid clouds
• Complex network environments make installation and management of probes and appliances laborious
8. Enable New Operational Insights
Enhanced Operational Intelligence
• Add information about application, infrastructure, security and business activity, without needing instrumentation
• Support new and extend existing Splunk use cases across IT, security and the business with wire data capture
Efficient, Cloud-Ready Wire Data Collection
• Gain visibility into any public, private or hybrid cloud infrastructure with a software solution
• Control data collection volumes with fine-grained protocol and attribute filtering
Fast Time to Value
• Deploy quickly from an interface-driven install
• Enable rapid incident response
• Easily scale out with centralized management
9. Better Insights for IT Operations
• Get real-time granular insights to reduce MTTR without costly appliances
• Analyze all applications and user behavior, measure application response times and trace transaction paths
• Identify infrastructure performance issues, capacity constraints and changes, and establish baselines
Wire Data (SQL queries, DNS records, IP conversations, transaction traces, ICA latency, response times) + Contextual Data (application logs; infrastructure (storage, network, server) logs; performance metrics; events) = Value
10. Better Insights for Security
• Real-time DPI of wire data backed by analytics enables easier forensic analysis and quicker incident response
• Analyze all user and application behavior and respond to threats in a timely manner with cost-efficient real-time header and payload field extraction
• Baseline network traffic and understand anomalies associated with advanced and insider threats
• Quick software install at endpoints, network infrastructures and cloud without expensive appliances
Wire Data (user and application traffic; protocol identification (TCP, DNS, HTTP, etc.); protocol header and payload extraction; SSL decryption) + Contextual Data (firewall logs, application logs, IDS logs, network logs, performance metrics, events) = Value
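The "baseline traffic, then spot anomalies" idea above can be sketched in a few lines. Splunk itself would express this in SPL; plain Python is used here only to make the logic concrete, and the data and the three-sigma threshold are invented for illustration.

```python
from statistics import mean, stdev

def anomalies(counts, threshold=3.0):
    """Return indices of samples that deviate from the baseline
    by more than `threshold` standard deviations."""
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# 23 normal hours of DNS query counts, then a burst that might
# indicate tunnelling or exfiltration over DNS
hourly_dns = [980, 1010, 995, 1002, 990] * 4 + [1005, 998, 987, 9800]
print(anomalies(hourly_dns))
```

The same pattern applies to the slide's other examples (MySQL or Postgres query rates, authentication volumes): establish what "normal" looks like, then alert only on statistically unusual hours instead of raw thresholds.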
11. Stream Forwarder Architecture
[Diagram] For each network interface (eth1 through ethN), a dedicated thread runs the same pipeline: captured packets are decrypted, assembled into request/response streams, decoded by a protocol decoder (deep packet inspection), and emitted as events on standard out to a Splunk forwarder.
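The per-interface pipeline on this slide can be modelled as a chain of small stages. The toy sketch below mirrors that flow in Python: decrypt, assemble request/response pairs, decode via "deep packet inspection", and print JSON events to standard out for a forwarder to pick up. All function names and formats are illustrative, not the real streamfwd internals.

```python
import json

def decrypt(packet):
    # Stand-in for the SSL decryption stage; these packets are plaintext.
    return packet

def assemble(packets):
    """Pair each request packet with the response that follows it."""
    return [(packets[i], packets[i + 1])
            for i in range(0, len(packets) - 1, 2)]

def decode(pair):
    """'Deep packet inspection': pull protocol fields out of a pair."""
    request, response = pair
    return {"method": request.split()[0],
            "uri": request.split()[1],
            "status": int(response.split()[1])}

def run_pipeline(packets):
    events = [decode(p) for p in assemble([decrypt(pkt) for pkt in packets])]
    for event in events:
        print(json.dumps(event))  # standard out, consumed by a forwarder
    return events

events = run_pipeline(["GET /index.html HTTP/1.1", "HTTP/1.1 200 OK",
                       "POST /login HTTP/1.1", "HTTP/1.1 401 Unauthorized"])
```

In the real forwarder each interface gets its own thread running this chain, which is what makes scale-out across many capture points straightforward.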
12. Supported Protocols and Platforms
UDP, TCP, HTTP, IMAP, MySQL (login/cmd/query), Oracle (TNS), PostgreSQL, Sybase/SQL Server (TDS), FTP, SMB, NFS, POP3, SMTP, LDAP/AD, SIP, XMPP, AMQP, MAPI, IRC, DNS, DHCP, RADIUS, Diameter, BitTorrent, SMPP
Supports Windows 7 (64-bit), Windows 2008 R2 (64-bit), Linux (32-bit/64-bit) and Mac OS X (64-bit)
Improved performance requiring less compute/memory power!
14. Architecture: Run on Servers
[Diagram] End users reach applications over the Internet, through a firewall, on physical or virtual servers in a physical datacenter or public/private cloud. Each server runs a Universal Forwarder with Splunk_TA_stream, which sends captured wire data to Splunk indexers queried from a search head.
15. Applications Visibility for Easy Capacity Planning
AVP of Networks and Communications, Large National Bank: "I enjoyed using the Splunk App for Stream as it's giving us a bunch of different perspectives on our traffic and better granularity compared to some of the other tools we used. Stream is unique because Splunk analytics are tied to a network monitoring tool."
Key Customer Benefits
• Granular application and network visibility drives easy remediation
• Proactive application and network traffic monitoring enables better capacity planning
• Powerful analytical engine enables data analyses by novice users
Deployment
• Quick host-based deployment at critical network segments, with the ability to observe both client and server traffic
16. Wire Data Intelligence Improves Security
Security Analyst, Payment Processing Company: "The thing that makes the Stream app better than any other packet analysis solution out there is the statistical analysis from Splunk Enterprise. You can apply it freely to all of the wire data, which enables me to analyze this data in ways not possible before. This visibility helps us prevent external infiltration and avoid malicious attacks."
Key Customer Benefits
• Real-time security intelligence to prevent attacks and infiltrations
• Baselining, trending and applying analytics to detect anomalies in traffic (MySQL, Postgres, etc.)
• Centralized management of all wire data results in operational cost savings
• Efficient monitoring of user authentications for audit and security
Deployment
• Non-intrusive and easy monitoring of server communication
• Flexible and easy integration with existing Splunk security dashboards
17. Wire Data Speeds Up Forensics
Security Engineer, Financial Services Institution: "The biggest value of Stream is how fast we can resolve and close security cases. Before Stream, I had to collect data from multiple systems and it would take me an hour. With Stream, the information is already there and I can get answers within 5 minutes. It is much easier to get data now."
Key Customer Benefits
• 90% reduction in incident triage and investigation time
• Deeper, quicker and easier understanding of traffic and user activity for forensic purposes
• Immediate insights and improved data collection: no more moving pcap files around between several tools
Deployment
• Flexible and easy deployment at key network locations
19. The Challenges of Delivering Mobile Apps
Form Factor, Platform, Interaction Style Variety
• OS- and device-centric development
• Need to correlate devices and versions
Rapid App Dev Cycles, Break-Fix Needs
• New OS versions break apps
• Network issues are difficult to find and simulate
• Limited time to make changes and fixes
Infrastructure Analytics
• Plan for growth
• Solve infrastructure, API and app issues
• Feature usage
• Monitor/analyze user behavior
• Deliver omni-channel analytics (mobile + web + desktop)
20. Mobile App Delivery: Different Challenges for Different Roles
Mobile App Developers
• How do I find the root cause of app crashes/poor performance?
• What were users doing when the issue happened?
• How do I get more insight into transaction paths?
App Managers/Operations
• Is the problem with the app, the network or the backend system?
• Do I have the right capacity in place to handle transaction volume?
• How does performance compare mobile vs. web vs. desktop?
Product Managers/Business Owners
• How are customers using my app?
• Which features should I prioritize for future versions?
• How does customer behavior compare across channels?
21. Enhance Operational Intelligence Using Mobile Data
• Deliver better performing, more reliable apps
• Achieve end-to-end visibility
• Deliver real-time analytics
22. How Splunk MINT Works
• Embed Splunk MINT SDKs in your mobile app and activate with one line of code
• Your app's operational data is securely transmitted to the Splunk MINT Data Collector
• Analyze your mobile operational data using the Splunk MINT App
• Correlate the data with other sources using Splunk Enterprise
Flow: mobile app operations data → Splunk MINT Data Collector → real-time mobile operational analytics
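Conceptually, what the SDK does after that one activation line is catch operational events, such as an unhandled exception, and ship them as structured reports to the MINT Data Collector. The sketch below models that idea in Python with an entirely hypothetical payload shape; the real SDK formats and endpoints are not shown here.

```python
import json
import time

def build_crash_report(api_key, exc, app_version, device):
    """Assemble a crash report (hypothetical field names) for the collector."""
    return {
        "apiKey": api_key,            # ties the report to your MINT project
        "timestamp": int(time.time()),
        "error": type(exc).__name__,
        "message": str(exc),
        "appVersion": app_version,
        "device": device,
    }

try:
    {}["missing"]  # simulate an app crash
except KeyError as exc:
    report = build_crash_report("YOUR_MINT_API_KEY", exc,
                                app_version="2.1.0",
                                device={"os": "Android 5.0", "model": "Nexus 5"})
    payload = json.dumps(report)  # the SDK would POST this to the collector
```

Because the report arrives as structured data, the MINT App and Splunk Enterprise can search, group and correlate it like any other machine data.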
23. Deliver Better Performing, More Reliable Apps
Real-time monitoring of crashes and performance:
• Improve user retention by quickly identifying crashes and performance issues
• Get immediate insight into transaction performance and the causes of transaction failures
• Identify network performance issues and assess how they impact your app
24. Achieve End-to-End Visibility
Use correlations to get comprehensive insights:
• Correlate Splunk MINT data with other Operational Intelligence for end-to-end transaction analysis
• Use Splunk Enterprise search capabilities to correlate and drill down into your mobile and non-mobile data
25. Deliver Real-Time Analytics
Get granular insights into your app and its users:
• Network performance: create dashboards that compare network performance by carrier (Wi-Fi, LTE networks, etc.)
• Geolocation: gain insight into usage and performance by where users are located
• Search and Pivot: use search and analytics capabilities to explore your mobile data
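The "network performance by carrier" dashboard above boils down to a group-by-and-average over MINT events. A minimal sketch with made-up data (the field names are illustrative, not the MINT event schema):

```python
from collections import defaultdict

def latency_by_carrier(events):
    """Average latency per carrier, the aggregation behind the dashboard."""
    sums = defaultdict(lambda: [0.0, 0])   # carrier -> [total_ms, count]
    for e in events:
        sums[e["carrier"]][0] += e["latency_ms"]
        sums[e["carrier"]][1] += 1
    return {carrier: total / n for carrier, (total, n) in sums.items()}

events = [
    {"carrier": "WiFi", "latency_ms": 40},
    {"carrier": "WiFi", "latency_ms": 60},
    {"carrier": "LTE",  "latency_ms": 90},
    {"carrier": "LTE",  "latency_ms": 110},
]
averages = latency_by_carrier(events)
print(averages)
```

Swapping the grouping key to a geolocation field gives the geolocation view from the same events.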
26. Getting Started With Splunk MINT
Mobile Developers
1. Sign up on mint.splunk.com
2. Download the SDKs and create mobile projects
3. Re-deploy your Splunk MINT-enabled apps
4. Check the Splunk MINT Management console
Splunk Admin
1. Download Splunk Enterprise
2. Download the Splunk MINT App
3. Run the wizard to connect to the Splunk MINT Data Collector
4. Get dashboards; search and correlate
27. MINT Benefits Developers and the Business
Mobile App Developers
• Immediate quality insights
• User, usage, transaction and network visibility
• Fast time-to-value with a lightweight SDK
App Managers/Operations
• Find bottlenecks across app, network, backend and APIs
• Right-size capacity for transaction volumes
• Ensure performance across all channels
Product Managers/Business Owners
• User behavior and user experience insights
• Faster, more valuable improvements
• Omni-channel analytics
28. Three Takeaways
1. Splunk App for Stream helps you see everything!
2. Splunk MINT helps you deliver more reliable and better performing mobile apps!
3. Use Splunk software for an end-to-end view of your critical applications!
Editor's Notes
Without our sponsors we couldn’t be here today. So please stop by outside this room in the pavilion. Thanks to all of you for being here and most of all sponsoring our happy hour!
Splunk App for Stream is a free app that enables you to capture, visualize and analyze data in a much more granular way than ever before. You can see everything: all user and application behavior, response times from every layer, DNS information, storage traffic, network traffic, your websites' content, connections. Once this data is in Splunk you can correlate it with other data for much more comprehensive visibility. First, Splunk App for Stream is a way of getting wire data into Splunk Enterprise. By adding this comprehensive source of machine data, it enables you to extend Operational Intelligence use cases across IT, security and the business. It is a software-only solution that can be installed on a VM on any host, enabling real-time insights into multi-cloud environments. As such, it is easy to install anywhere on most standard machines, and it is a passive, very efficient way to capture data.
What can you get out of wire data that you don’t already get from other machine data?
There is a small amount of overlap between wire data and other data that we've captured so far. For example, web server logs typically record status codes such as an HTTP 200 response, indicating whether a web page was rendered properly to a client. However, what is missing is transaction payload information: the logs cannot show which of those HTTP 200 responses were for pages carrying a "service unavailable" message. That information is contained in the wire data, the transaction payload, and is not logged by the server. Can you get this from log data? Yes, if you instrument the code. And that is the beauty of wire data: it does not require any instrumentation of the application.
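The point in the note above can be made concrete: two responses may both log HTTP 200, yet only payload (wire) inspection reveals that one of them actually served an error page. A small illustration in Python; the data is invented, not real Stream output.

```python
def page_ok(response):
    """True only if the status is 200 AND the body carries no error text."""
    return (response["status"] == 200
            and "service unavailable" not in response["body"].lower())

# Both responses would look identical in a web server access log (200 OK),
# but the second one actually shows an error page to the user.
responses = [
    {"status": 200, "body": "<html><h1>Welcome back!</h1></html>"},
    {"status": 200, "body": "<html><h1>Service Unavailable</h1></html>"},
]
results = [page_ok(r) for r in responses]
```

This is exactly the gap between log data (status code only) and wire data (status code plus payload) that the note describes.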
While wire data is a golden source of operational performance information, it is very challenging to deal with. It is high-volume, running to petabytes of raw data a day; it is high-velocity, with higher-speed interfaces such as 10 Gbps and 40 Gbps becoming the new standard capacity in datacenters and ever-increasing capacity in the cloud; and it is high-variety, with a multitude of application protocols and styles of transactions in use. Wire data can also be difficult to harvest in a scalable manner. There are typically dozens of potential instrumentation points on the wire within a single data center where valuable application and operational data can be obtained, and this easily extends to hundreds of instrumentation points distributed across a global enterprise. As well, an accurate representation of the wire data is required to maximize its operational value.
With this app users can capture application transaction times, transaction paths, network performance, and even database queries. By correlating wire data with other application and infrastructure data in Splunk software, such as logs, metrics and events, users gain insights about the availability, performance and usage of their apps, services and networks. IT admins can pinpoint root cause, proactively monitor the performance and availability of their individual technology silos, map dependencies of infrastructure to applications and trend performance to establish baselines. For security, wire data lends itself to rapid incident investigation, more complete threat detection, and expanded monitoring and compliance. For the business, wire data also captures user interactions and process insights for a deeper understanding of the user experience, supporting multiple business analytics use cases.
The Splunk App for Stream enables efficient, cloud-ready wire data collection with a single software solution, providing real-time visibility into any public, private or hybrid cloud infrastructure through insights from wire data. Additionally, customers can now securely decrypt SSL-encrypted data for data completeness, and capture only the relevant wire data for analytics through filters and aggregation rules. The app provides the ability to control and manage wire data volumes with fine-grained precision by selecting or deselecting protocols and their associated attributes within the app interface.
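The filter-and-aggregate idea can be sketched roughly like this – a hypothetical example, not the app’s actual rule engine: keep only the protocols you’ve selected and roll captured flows up into per-protocol byte counts, so only the reduced volume moves on for indexing.

```python
from collections import defaultdict

# Hypothetical captured flow records; field names are illustrative only.
flows = [
    {"protocol": "http", "src": "10.0.0.5", "bytes": 1200},
    {"protocol": "dns",  "src": "10.0.0.5", "bytes": 80},
    {"protocol": "smb",  "src": "10.0.0.9", "bytes": 4096},
    {"protocol": "http", "src": "10.0.0.9", "bytes": 300},
]

# Deselecting protocols is what trims the data volume.
ALLOWED = {"http", "dns"}

def aggregate(flows, allowed):
    """Keep only selected protocols and roll up bytes per protocol."""
    totals = defaultdict(int)
    for f in flows:
        if f["protocol"] in allowed:
            totals[f["protocol"]] += f["bytes"]
    return dict(totals)

print(aggregate(flows, ALLOWED))  # {'http': 1500, 'dns': 80}
```

The SMB flow never leaves the collector, which is the point: volume control happens at capture time, not after indexing.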
Lastly, it can be rapidly deployed to collect wire data in real time, providing network visibility that is otherwise unavailable in cloud implementations and hard to achieve in traditional datacenters. Customers can now quickly respond to any issue thanks to a simple interface-driven installation and centralized deployment and configuration across IT environments of all sizes.
So let’s start with IT Operations. You can capture an IT-relevant data set from the network and enrich it with existing data in Splunk such as infrastructure and application logs and events. You capture the content of database queries, granular IP conversations, transaction traces and application response times. As a result, IT teams gain granular visibility into infrastructure performance and resource utilization, and can solve capacity bottlenecks. They gain visibility into application availability, performance and usage, and how these relate to underlying infrastructure components. IT admins can establish better baselines and trending for application performance and usage, enabling better IT and business decision making. This all results in faster resolution of problems with fewer people.
Stream brings huge benefits for your security practitioners. It is particularly interesting because they are most likely already used to packet sniffing for forensic and real-time analysis. The data captured contains all user activity and behavior, as well as application behavior. With Stream, security customers can perform deep protocol inspection, understanding at a very granular level what is going on. This can be used both in real time to understand risk and when responding to an incident. In addition, security investigators can observe daily or seasonal traffic patterns so they can react immediately when those patterns become anomalous – they can respond to insider threats, see when someone is emailing intellectual property out, or spot someone mimicking database queries to try to gain access to internal databases. Stream extracts both header and payload information for very deep, granular insight for incident response and threat prevention. It is also important to mention that it can be deployed anywhere, including on endpoints, without having to buy expensive appliances – very important when a customer is under breach conditions.
Backup
Protocol header and data decoding: HTTP, DNS and email protocols (e.g. IMAP, POP3 and SMTP) are the dominant attack and exfiltration vectors for some of the most damaging breaches. Stream can be deployed to acquire header information (HTTP and email) and payload information (DNS) to drive sophisticated analytics for threat detection, incident response, intelligence gathering and threat prevention.
Rapid deployment and response: When incident investigation, analysis or tracking down malware requires additional real-time information from network traffic, threat responders can leverage Stream’s simple and rapid deployment via Splunk to start getting wire data from the system of interest into Splunk. This is useful under breach conditions – where a known infiltration may be in progress.
And finally, events are generated based on the Stream configuration from “App for Stream” and passed on to the UF as modular input data (streaming standard output) in JSON format.
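That handoff can be sketched as follows – a minimal, hypothetical illustration of a collector emitting one JSON event per line on standard output, the way modular input data reaches a forwarder (the field names are illustrative, not the actual Stream schema):

```python
import json
import sys
import time

def emit(event, out=sys.stdout):
    """Write one event as a single JSON line to the given stream,
    stamping a capture time if the event does not already carry one."""
    event["timestamp"] = event.get("timestamp", time.time())
    out.write(json.dumps(event) + "\n")
    out.flush()  # modular inputs stream as they go; don't buffer

# One decoded HTTP event, handed to the forwarder via stdout.
emit({"protocol": "http", "status": 200, "uri": "/index.html", "timestamp": 0})
```

One JSON object per line keeps the stream trivially parseable on the forwarder side: each line is a complete, self-describing event.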
Here is the current list of supported protocols and platforms; we now also support the Windows OS and have improved performance. Talk with your customers and ask them whether there is any other protocol they would find extremely useful to have added – and also ask why they need that particular protocol.
We can get wire data directly from the “wire” by installing our wire data collector (the TA) on a dedicated physical server. This server then receives a passive network copy from a SPAN/TAP port or packet broker, which transports the “real” wire data of interest to the software.
Alternatively, the data collector can live directly on the systems of interest as a lightweight agent, where the systems can be either physical or virtual. In both cases the data collectors are actually TAs and therefore need to co-locate with a forwarder.
In this example, Stream is deployed at one of the large national banks, based in Texas. They had acquired branches around the country and are in the process of integrating them with the headquarters datacenters; they have several months to complete the integration. They are using Stream to better understand the traffic crossing key links, not only within the country but also internationally. Stream gives them very granular visibility into any traffic: they can distinguish top talkers from top communicators, and apply analysis to trigger an alert when traffic utilization exceeds a specific threshold. The data is also used by new IT personnel. What they get from Stream that they cannot get from other tools is the Splunk analytics behind it. With other tools they can get some data, but the granularity is not there – and many of those tools don’t look at the client perspective.
Example: With Stream and Splunk this customer can perform granular analytics they could not do with other tools. “With other tools I can look at my conversations, or see that 50 percent of all my bytes coming across are from one host. I can alert when the bandwidth is at 85 percent, right? I can do that all day long with other tools. But I can’t necessarily go look at the traffic and alert on, ‘Hey, this IP address is taking all the bandwidth.’ That and much more I can do with Stream.”
This company has deployed Splunk in the financial industry, specifically in SaaS-based payment processing. They are deploying Stream to monitor wire data in their internal communication so they can easily detect anomalies in traffic. For example, they are able to look into MySQL and PostgreSQL database traffic and detect issues with user authentication and more, examining what type of data is being sent to their SQL and PostgreSQL servers. One of the biggest values for them is being able to apply Splunk statistical analysis to wire data and normalize the queries, so they can prevent external infiltration and avoid malicious attacks. Both in real time and historically, they are able to set baselines for the amount and type of their database communication. By doing so, they were able to prevent injection of malicious queries, ensuring there were no attacks on their servers. They integrated wire data into their existing security dashboards and proactively look for any abnormalities in communication. They also look for unexpected traffic such as IRC communication, or for exposed passwords in user authentication. Protocols: MySQL, PostgreSQL, LDAP, RADIUS, IRC, SMB, FTP.
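The query-normalization idea they rely on can be sketched like this – a rough, hypothetical example, not the customer’s actual implementation: collapse literals so that structurally identical queries baseline together, which makes injected variants stand out as a new query shape.

```python
import re

def normalize(query):
    """Collapse literals so structurally identical queries share one shape.
    A rough sketch: a real normalizer must also handle quoting edge cases,
    comments, and dialect-specific syntax."""
    q = re.sub(r"'[^']*'", "?", query)  # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)      # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip().lower()

a = normalize("SELECT * FROM users WHERE id = 42")
b = normalize("SELECT * FROM users WHERE id = 99")
print(a == b)  # prints True: same shape, so both count toward one baseline
```

An injected variant such as `id = 1 OR '1'='1'` normalizes to a shape the baseline has never seen, which is exactly the anomaly the statistics can flag.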
This customer is one of the banking institutions in the US. They have deployed Stream to monitor data in the DMZ and on egress, at points with visibility across all the traffic. They wanted to simplify data collection for forensics purposes; they did not want to search multiple tools to get the data they were looking for. The value of Stream for them is how fast they can resolve and close security cases. They got Stream because they wanted the so-called “higher level” data. For example, firewall logs offered them only very basic information, such as that this user tried to connect to this or that external website, or that an external user tried to connect to this resource from the outside – they get IP, destination port, and that is it. From Stream they get a better understanding of the traffic. Now they can answer questions like: did this user from the outside try to issue an SQL injection? Once they have the IP address from the firewall, they can search Stream and get a better view of what the user did. [The way they did it before was to get the pcap for the user based on the IP information in the firewall log. Now they don’t need to pull the pcap to get into fine detail – they can just look in Splunk and see what actually happened.] They look at many things from their IDS, including alerts on SQL injection, exploit attempts, etc. If something is new, they check Stream for more details.
Before Stream, one example: they would take an IDS alert, pull the corresponding pcap, and then examine that pcap in another tool to see what happened – it would take about an hour. With Stream, they enter the source and destination IPs and get the data instantly, then decide whether further investigation is needed. With Stream that hour goes down to about five minutes – roughly a 90% reduction. It is much easier to get data now.
For them, the ability to look at metadata for HTTP-level data – seeing things such as the user agent and the response – is valuable and very useful for someone in the security domain.
There are specific challenges in managing mobile apps that are different from those of traditional applications. Traditional apps are delivered to the user through a browser, and most of the magic happens on the web, application and database servers. Mobile apps are different: there is a variety of form factors – tablets, smartphones, etc. – and you have multiple OSs and interaction styles.
Mobile apps often have a large number of releases in production. If you multiply the number of handset types by OS versions by specific versions of the application (based on when users last updated), there is a huge number of permutations of potential mobile app clients to account for. Mobile operations, app owners and mobile developers need to be able to determine whether a certain application experience is unique to a particular release of the app.
Second, mobile apps are leaner, they’re easier to develop, and through app stores it is easy to push new updates out to users. But with every change, there’s a risk of errors and issues that weren’t caught in development. Developers need to know immediately what went wrong so they can push better code in the next rev of the app – they have a short window to make changes and fixes.
Third, unlike most enterprise apps, mobile devices and apps don’t generate log files. As a result, if you want information about errors, exceptions and so on, you have to instrument mobile apps with an SDK, identify what you want to measure, and decide where to send that information.
Since app owners and developers are preoccupied with the three areas I just mentioned, they lack the analytics that would give them insight into feature usage and user behavior. Also, the experience that mobile apps provide needs to be correlated and compared with other application channels. Not only that – it’s important to understand how mobile applications affect application infrastructures, for capacity planning and other reasons.
Mobile initiatives are new, and we’ve seen no consistent model so far for how they’re organized. But we do find three kinds of stakeholders responsible for better mobile Operational Intelligence. App Operations, as the people who first get frustrated calls from end users, need to isolate what’s going on and perform basic triage. App Developers need to understand the source of application crashes so they can quickly push better releases out to mobile users. Application Owners know that persistent problems will make people abandon their app, so they want to know how people are using the application and what experience they are receiving.
To address the needs of developers, operations and product management, you need Operational Intelligence for your mobile apps. This is what we call mobile intelligence. Mobile intelligence provides real-time insight on how your mobile apps are performing, and can correlate with and enhance Operational Intelligence.
Splunk software enables organizations to search, monitor, analyze and visualize machine-generated data from websites, applications, servers, networks, sensors and mobile devices. Splunk MINT helps organizations monitor mobile app usage and performance, gain deep visibility into mobile app transactions, and accelerate development.
Deliver better performing, more reliable apps
When a user has a problem with a mobile app, the issue could be isolated or spread across all app versions, handsets and OS types. With Splunk MINT, you can see issues with app performance or availability in real time. Bugs can be addressed quickly, and app developers can gain a head start in creating and delivering valuable app updates.
Achieve End-to-End visibility
When mobile apps fail, there are many potential sources of failure. With Splunk MINT, you can analyze overall transaction performance. And using Splunk MINT, you can correlate this data with information from back-end apps to gain detailed insight on transaction problems. As a result, operations can reduce MTTR and better anticipate future mobile app back-end requirements.
Deliver real-time analytics
Mobile apps give enterprises new ways of conducting digital business. With mobile app information in Splunk Enterprise, you can correlate usage and performance information— some call this omni-channel analytics—to better understand how users are engaging all aspects of your organization.
Unlike backend systems, whose operational metrics are easily accessible, mobile applications require us to gain insight from all the mobile endpoints that use the app. There are three major components that make this work.
First, mobile app developers embed Splunk MINT SDKs into the mobile apps they track. They can get the SDKs at mint.splunk.com. For basic app crash, performance and user session insights, this requires as little as one line of code, which is well documented on mint.splunk.com. Once they redeploy their MINT-instrumented apps, they are off and running.
Once applications are in production, information is automatically gathered and sent from each mobile endpoint to the Splunk MINT Data Collector. This information is encrypted, so there’s low security risk. Also, there is very low bandwidth and overhead required on the mobile endpoints to make this happen.
Information moves from the Splunk MINT Data Collector to the customer’s instance of Splunk Enterprise, thanks to a Splunk add-on enabled with a token that uniquely identifies the customer’s information. The transfer between the Splunk MINT Data Collector and each customer’s instance of Splunk Enterprise is secured with public-key cryptography.
Once that information is in Splunk Enterprise, you can search, correlate, and analyze your mobile data. Also with the Splunk MINT app, you get a range of dashboards, over 40 reports, and a data model that helps you accelerate searches and correlations.
Now let’s talk about how Splunk MINT enables better performing, more reliable apps…
First, Splunk MINT captures information about app crashes in real time and provides that information back to you. Additionally, performance bottlenecks, such as those caused by a slow API, can be identified and surfaced.
What makes this valuable is that it all happens in real time. Before Splunk MINT, developers had to rely on belated reports from iTunes, Google Play, etc. By the time they were notified of a poorly performing app, many users had already abandoned it, rated it poorly, and so on. With Splunk MINT, developers get this information in a matter of seconds.
Perhaps most important, you can use Splunk MINT to correlate data from your mobile intelligence source type with other source types. Not only does this let you create transaction analyses that include the mobile app, it also allows you to start thinking omni-channel – how the mobile experience compares to, and adds value to, the other channels your organization is using.
Splunk Enterprise allows additional ways of visualizing your information. One great example is using geolocation information to get better insight into where mobile users are using your applications from, as you can see here.
Additionally, information on network performance is more granular. You can create dashboards that compare network performance by different mobile carriers, and you can also get more detailed information on user sessions.
Getting Splunk MINT up and running is rather straightforward, but does require action from both mobile developers as well as the person responsible for the Splunk deployment.
Mobile developers have a few key steps to follow. First, they go to mint.splunk.com and sign up. This takes as little as two minutes and gives them access to the SDKs and other resources required to easily integrate the SDKs into their mobile apps. Once they have embedded the Splunk MINT SDKs into their mobile apps, they redeploy the apps and can quickly verify that mobile Operational Intelligence data is coming in by checking the Splunk MINT Management Console.
Splunk administrators connect mobile data with their implementation of Splunk in a few easy steps. First, they download the Splunk MINT app and get a token from their sales person/fulfillment team that uniquely identifies them to the Splunk Data Collector. Then run the connection wizard (part of the app) and provide that token. Mobile data starts coming to that instance of Splunk – securely via PKI.
Across stakeholders, MINT provides tremendous benefits. Mobile app developers are able to build better performing, more reliable apps by getting immediate insight into performance and availability. They also know how their applications are being used, and can apply that information in subsequent releases.
Application operations benefit from MINT through immediate awareness of mobile app failures. They can quickly identify the source of issues and engage the right organization, so MTTR is decreased. Additionally, operations can better plan for mobile growth by spotting usage patterns.
Product managers and business owners benefit by getting better insight into user behavior. Additionally, they can begin to think omni-channel by better understanding how mobile apps are used, and how they are used in the context of non-mobile channels.
First, the Splunk App for Stream is a way of getting wire data into Splunk Enterprise. By adding this comprehensive source of machine data, it enables you to extend Operational Intelligence use cases across IT, security and the business. It is a software-only solution that can be installed on a VM or on any host, enabling real-time insights into multi-cloud environments. As such, it is easy to install almost anywhere on standard machines, and it is a passive, very efficient way to capture data.