The document summarizes a case study of migrating Java applications from Weblogic to the vFabric Cloud Application Platform. It discusses the rationale for migration, including issues with the previous platform around agility, reliability, and cost. The key steps are described: defining scope, the new target platform, and development processes; porting the applications through pattern translations; testing; and creating standardized deployments. The new deployment architecture consolidated applications across fewer VMs for improved scalability and manageability. Lessons learned focused on properly defining and limiting scope, as well as on performance testing.
Using Pivotal Cloud Foundry with Google’s BigQuery and Cloud Vision API - VMware Tanzu
Enterprise development teams are building applications that increasingly take advantage of high-performing cloud databases, storage, and even machine learning. In this webinar, Pivotal and Google will review how enterprises can combine proven cloud-native patterns with groundbreaking data and analytics technologies to deliver apps that provide a competitive advantage. Further, we will conduct an in-depth review of a sample Spring Boot application that combines PCF and Google’s most popular analytics services, BigQuery and Cloud Vision API.
Speakers:
Tino Tereshko, Big Data Lead, Google
Joshua McKenty, Senior Director, Platform Engineering, Pivotal
This document provides an overview of cloud computing and testing in the cloud. It discusses key aspects of cloud computing including pay-per-use models, virtual server pools, and various cloud deployment models. It then covers cloud service level agreements and their technical and commercial terms. The document outlines different strategies for testing in the cloud, including automation, functional testing, and monitoring. It also discusses challenges like security and reliability and how defects are tracked. Overall, the document provides guidance on testing applications and infrastructure deployed in cloud environments.
IBM announces a family of Cloud Paks that provide developers, data managers, and administrators an open environment to quickly build, modernize, and deploy applications and middleware across multiple clouds. The Cloud Paks include containerized IBM software and open source components that can be easily deployed to Kubernetes and provide capabilities for lifecycle management, security, and integration with services. Cloud Paks simplify enterprise deployment and management of software in containers and provide a consistent way for organizations to move more workloads to cloud environments faster.
DevOps automation for Container based App Delivery - WaveMaker, Inc.
Modernization of IT and Container revolution
DevOps automation using containers
Lift and shift Apps into containers automagically.
Unified App delivery to Hybrid & Multi-clouds
Case study and Demo
CloudGenius offers an incremental, evolutionary process for migrating IT systems to the cloud. It uses the (MC2)2 framework to evaluate alternatives and make complex decisions through multi-criteria analysis and decision-making methods such as AHP, and a prototype called CumulusGenius is available to demonstrate the migration process.
Integration architecture for the hybrid and multi-cloud enterprise
It is a given that most enterprises are now spread between on-premises and cloud, resulting in a need to perform integration across this hybrid architecture. Furthermore, most customers are seeing, or at least predicting, a multi-cloud architecture: multiple clouds from multiple vendors, providing a variety of different platforms, which brings a whole new set of integration challenges.
We will look at how integration architecture has evolved from service-oriented architecture to take advantage of cloud-native technologies and microservices principles. We will also discuss how integration is affected by multi-cloud issues and what the typical resolutions are. Also available as a webinar: http://ibm.biz/MultiCloudIntegrationArchitectureWebinar
Running Cloud-Native Applications with Pivotal Cloud Foundry on Google Cloud ... - VMware Tanzu
Running Cloud-Native Applications with Pivotal Cloud Foundry on Google Cloud Platform (Pivotal Cloud-Native Workshop: Milan)
Fabio Marinelli
7 February 2018
Agile integration concepts help to move integration landscapes towards a more cloud native approach. This brings benefits such as improved productivity, deployment confidence, granular resilience, and more efficient use of human and computer resources.
Those following this path will recognize it is a journey, not a single step, and we at IBM are moving our focus to one of the most critical parts of that journey: progressively automating your integrations. This refers to automation at multiple levels, from lifecycle automation (CI/CD) to operational automation that enables site reliability engineering practices. It reinforces the essential nature of the operational consistency brought by container platforms, enabling multiple integration capabilities to be administered in increasingly similar ways.
It also becomes increasingly clear that in this more decentralized and distributed world, multiple integration styles will often be used alongside one another, sometimes even in the same solution. This further heightens the importance of automation, as there are so many moving parts to be deployed and administered. It is here that we see huge potential gains from applying machine learning to further improve the level of automation.
Cloud testing refers to testing applications and services that are hosted in cloud environments. There are three types of clouds: private, public, and hybrid. Cloud testing provides benefits like reduced costs since resources are accessed on-demand. It involves testing applications deployed in clouds, testing the cloud infrastructure itself, and testing across multiple cloud environments. Key challenges of cloud testing include security, lack of standards, infrastructure limitations, and improper usage increasing costs. Existing research on cloud testing and software testing as a service is limited but focuses on test modeling, criteria for cloud applications, and commercial cloud testing tools and services.
Best Practices for Monitoring Your Cloud Environment and Applications - Prolifics
This document summarizes a presentation on monitoring cloud environments and applications. The presentation discusses how monitoring requirements have changed in the cloud, with an emphasis on automation, standardization, and quick deployment. It outlines challenges in monitoring public clouds and requirements for cloud monitoring. The presentation then reviews IBM's monitoring offerings for private, IaaS, PaaS and SaaS cloud models, including IBM SmartCloud Application Insight, IBM SmartCloud APM, IBM Service Engage, and IBM Pure Application System Monitoring. It concludes with a demo of IBM SmartCloud Application Insight.
Webinar presentation October 22, 2015.
The model behind Platform-as-a-Service (PaaS) is to provide a platform for customers to develop, run, and manage web applications without needing to build or maintain the infrastructure, which can reduce costs while increasing flexibility and speed-to-market.
In the CSCC deliverable, Practical Guide to Platform-as-a-Service, learn how to use PaaS to solve business challenges, specifically:
- Definition of PaaS, the benefits of using PaaS, and examples of PaaS offerings
- Applications best suited for PaaS and the considerations for architecture, development, and operations
- Recommendations for the best use of PaaS services
Download the deliverable: http://www.cloud-council.org/resource-hub
Understand the future of software development in the cloud with the Azure app... - Jeremy Thake
Organizations reported a 466% return on investment and $5.91 million in net present value from shifting application development and deployment from Azure IaaS to Azure PaaS. This shift saved 80% of IT time and resulted in 50% faster service deployment times. Azure PaaS provides pre-built infrastructure and services including Azure App Service, Azure Functions, and Azure Service Fabric to simplify development and allow developers to focus on their application code rather than infrastructure management.
The Open Data Center Alliance Cloud Maturity Model (CMM) provides an end-to-end visualization for how the use of cloud in the enterprise develops over time (adoption roadmap) and how the enterprise’s ability to adopt cloud-based services within defined governance and control parameters increases.
As it matures, the use of cloud becomes more sophisticated, comprehensive, and optimized. The CMM plots the progression of structured cloud service integration from a baseline of no cloud use through five progressive levels of maturity.
This presentation will walk attendees through the maturity model that the ODCA has developed and how they can apply this model to their organization, creating a comprehensive plan to fully integrate cloud into their operations.
Cloud migration process simplified - Innovate Vancouver
The document outlines a four-phase cloud migration process: pre-migration planning, application analysis and architecture design, migration execution and testing, and post-migration operations. Key steps involve forming a migration team, analyzing applications, designing the cloud architecture, executing the migration, testing and validating, training staff, and capturing lessons learned to complete the transition to cloud operations.
CRM Trilogix; Migrating Legacy Systems to the Cloud - Craig F.R Read
This document discusses migration to the cloud and provides an overview of the cloud migration process. It notes that while currently only 5% of organizations have migrated half their applications to the cloud, that number is expected to increase to 20% by the end of the year. The document then outlines challenges, approaches, and phases of cloud migration including planning, deployment, and optimization. Specific migration use cases and recommended AWS services are also provided.
Continuous Delivery on IBM Bluemix: Manage Cloud Native Services with Cloud N... - Michael Elder
Development teams want to move quickly. Operations teams want to move forward with effective risk management. How do you balance these concerns? With IBM Continuous Delivery for Bluemix, developers are empowered to deliver changes at cloud speed, while release managers can establish policies that ensure compliance with standards. Promotions can be automated all the way to production while enforcing team policies around test coverage and automated test success. And of course, environment inventories are always just a click away. In this talk, you’ll learn how to enable your enterprise teams to deliver like a startup, without violating corporate regulations like separation of duties.
This document summarizes an IBM presentation on emerging cloud migration approaches including AI planning, chatbots, and beyond. The presentation discusses using AI and machine learning to improve various phases of the cloud migration process, such as workload classification, selection, and planning. It also covers automating migration tasks and using conversational interfaces like chatbots to assist throughout the lifecycle. The document provides examples of how these approaches have helped customers successfully migrate to the cloud.
VMworld 2013: Tools and Techniques to Manage the Hybrid Cloud Environment - VMworld
VMworld 2013
Lily Chang, VMware
Amit Pathak, iGATE
David Wright, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
This document discusses IBM's hybrid multicloud platform and digital transformation. Some key points:
- IBM's hybrid multicloud platform is founded on Red Hat technologies like Red Hat Enterprise Linux and Red Hat OpenShift which allow applications to be built once and deployed anywhere across public clouds, private clouds, and on-premises.
- The platform provides consistent management, security, and services across heterogeneous cloud environments from different vendors through an open, standards-based approach.
- A case study describes how Deutsche Bank used Red Hat solutions to build an application platform that streamlined development, improved efficiency, and allowed applications to be developed in 2-3 weeks instead of 6-9 months.
DevOps within the Hybrid Cloud: Deploying to the VMware Platform on the IBM Cloud - Michael Elder
This document discusses deploying VMware workloads to the IBM Cloud platform using VMware on IBM Cloud. Key points include:
- IBM Cloud allows customers to easily move existing VMware workloads from on-premises data centers to IBM Cloud on a common platform.
- IBM Validated Design simplifies deployment of VMware Cloud Foundation on IBM Cloud infrastructure consisting of bare metal servers, VMware software, and automated lifecycle management.
- The partnership between IBM and VMware enables customers to achieve a consistent management and security model across their hybrid cloud with familiar VMware tools.
Migration from Weblogic to vFabric Cloud App Platform - VMware vFabric
The document discusses migrating Java applications from Weblogic to vFabric Cloud Application Platform. Key points include:
- Migrating provides business benefits such as increased agility and reliability, and reduced costs through lower deployment downtimes, improved scalability, and a smaller hardware footprint and Op-Ex spend.
- The migration process involves defining scope, the new target platform, development platform, porting applications through pattern translations and testing, and creating standard deployments.
- Benefits realized included operational consolidation through a 66% reduction in hardware footprint, 15% lower Op-Ex costs, and improved deployment agility through a five-fold reduction in downtimes.
VMworld 2013: How to Replace Websphere Application Server (WAS) with TCserver - VMworld
VMworld 2013
Kaushik Bhattacharya, Pivotal
Michel Bond, VMware
VMworld 2013: Moving Enterprise Application Dev/Test to VMware’s Internal Pri... - VMworld
VMworld 2013
Thirumalesh Reddy, VMware
Padmaja Vrudhula, VMware
Gain Insights, Make Decisions, and Take Action Across a Streamlined and Autom... - Arraya Solutions
This document provides an overview and summary of vRealize Automation and vRealize Operations solutions. It begins with an agenda and discusses how these solutions can help organizations address challenges around accelerating service delivery times, gaining insights across hybrid cloud environments, and ensuring quality of service. New features of vRealize Automation 6.2 like enhanced integration with vRealize Operations and an admin-friendly CLI are highlighted. The document also reviews the key capabilities and benefits of vRealize Operations for intelligent operations, predictive analytics, compliance management, and visibility across private and public clouds.
Application Modernization with PKS / Kubernetes - Paul Czarkowski
This document discusses strategies for modernizing applications and replatforming them using Pivotal Container Service (PKS). It outlines how companies have different options for packaging and running workloads, such as containers, microservices, serverless functions, and monolithic applications, with PKS aiming to provide the right runtime for each workload type. The document compares container orchestrators, application platforms, and serverless functions, noting that PKS aims to push workloads higher in the platform hierarchy for more flexibility and less enforcement of standards while lowering development complexity and improving operational efficiency. It provides recommendations for getting started with migrating workloads to PKS, such as lifting and shifting applications with minimal modernization, leveraging platform capabilities, and fully modernizing applications.
Enterprise DevOps is different from DevOps in startups and smaller companies. This session covers how AWS and CSC address this: AWS IaaS-level automation via CloudFormation, UserData, the Console, and APIs, plus PaaS offerings such as OpsWorks and Elastic Beanstalk, is complemented by the CSC Agility Platform. CSC Agility adds application compliance and security on top of AWS infrastructure compliance and security, and allows for the creation of architecture blueprints for predefined application offerings.
VMworld 2013: VMware and Puppet: How to Plan, Deploy & Manage Modern Applicat... - VMworld
VMworld 2013
Nigel Kersten, Puppet Labs
Becky Smith, VMware
vCloud Automation Center and Pivotal Cloud Foundry – Better PaaS Solution (VM... - VMware Tanzu
David Benedict - Member of Technical Staff, VMware
Cornelia Davis - Platform Engineer, Cloud Foundry, Pivotal
Vipul Shah - Director of Product Management, VMware
vCloud Automation Center provides powerful capabilities for policy-based orchestration of complex infrastructure and application deployments. A Platform as a Service (PaaS) such as Pivotal CF, built on the open-source Cloud Foundry, presents a set of abstractions and capabilities that focus on the application implementation and the run-time services it will leverage.
The value of a PaaS installation is equally driven by the set of application-centric capabilities provided, such as performance monitoring or logging, and by the set of services that can easily be integrated into an application; exposing the offerings in the vCloud Automation Center services catalog for leverage by apps deployed into Pivotal CF allows an enterprise faster time to value. And a vCloud Automation Center user can model system deployments, automating infrastructure provisioning and software deployments; this modeling is equally valuable even when the targets of the orchestrations are the PaaS abstractions of applications and services.
These products are very complementary and we’ll show you how. Understand how the combined vCloud Automation Center / Pivotal CF solutions provide the basis for a comprehensive PaaS solution. See a demo of and roadmap for the integrated solution. Learn how to use vCloud Automation Center to model applications for deployment into Pivotal CF and how to draw vCloud Automation Center services into Pivotal CF.
After a brief overview of both products, we will describe the capabilities and derived value of the joint solution that will have early access availability at the time of the conference.
Hybrid Cloud Orchestration: How SuperChoice Does It - RightScale
This document discusses SuperChoice's hybrid cloud orchestration approach. It summarizes:
- SuperChoice migrated over 200 applications to hybrid clouds using a Cloud Management Platform for orchestration.
- They took a "fix by rebuild" approach to automate deploying entire environments from source scripts.
- Significant challenges included addressing technical debt, cultural change for staff, and ensuring portability across clouds.
- Lessons learned were that automation is critical, people issues are the biggest barrier, and a methodical approach worked best for the transition.
Continuous Delivery for cloud - scenarios and scope - Sanjeev Sharma
Cloud is both a catalyst and an enabler for DevOps. The flexibility, services, and capabilities provided by the cloud lower the barrier to adoption for organizations looking to adopt DevOps, allowing them to achieve the business goals of speed, business agility, and innovation.
This webinar will explore the impact of DevOps on using the Cloud as a Platform as a Service and vice versa. It will explore the different use cases of DevOps that are enabled or enhanced by the Cloud platform, and the different 'scopes' of adoption by organizations adopting Cloud and DevOps in an iterative manner.
Presentation: business critical applications in a virtual env - solarisyourep
This document discusses virtualizing business critical applications. It begins by explaining why IT operations and application owners sometimes differ on virtualization, with the former wanting infrastructure efficiency and the latter concerned about performance and support. The document then shows trends of increasing virtualization for various workloads. It outlines operational benefits and new features in vSphere 5 that further reduce barriers to 100% virtualization. The rest of the document focuses on how virtualization can improve quality of service, availability, and time-to-market for applications through features like dynamic resource allocation, high availability, disaster recovery, and faster test/development cycles. It also addresses specific application information and licensing models. In the end it provides recommendations for getting started with virtualizing critical applications.
DevOps and Application Delivery for Hybrid Cloud - DevOpsSummit session - Sanjeev Sharma
The world is hybrid. Organizations adopting DevOps are building delivery pipelines that leverage complex environments spread across hybrid cloud and physical infrastructure. Adopting DevOps hence requires application delivery automation that can deploy applications across these hybrid environments.
This document summarizes VMware products for automating software defined data centers and applications. vCloud Automation Center allows modeling applications and deploying them across multi-cloud infrastructures. It provides a unified service catalog for infrastructure, applications, and other services. vCenter Operations provides visibility and management across multiple clouds and aids in identifying performance issues. It also enables config and compliance governance across clouds. Hyperic monitors custom web application performance. vCenter Log Insight is a log management platform that can ingest various log and metric data at scale for analysis. It integrates with vCenter Operations for correlating performance and log data.
VMworld 2013: Architecting the Software-Defined Data Center - VMworld
VMworld 2013
Aidan Dalgleish, VMware
David Hill, VMware
Kamau Wanguhu, VMware
Presentation: advanced management – the road ahead - solarisyourep
The document discusses VMware's approach to management and automation for IT organizations. It introduces several new management suites from VMware that aim to help IT operate more like a business and deliver value to the business. The suites discussed are the vCenter Operations Management Suite, vFabric Application Management Suite, and IT Business Management Suite. The suites are designed to simplify management, increase automation, provide visibility across infrastructure and applications, and help IT articulate its value using business metrics and language.
The slides from a recent webinar I delivered on why IT Portfolio Modernization is important for a successful Business Process Transformation.
VMworld 2013: Extend VMware’s Cloud Automation Solution with vCenter Orchestr... - VMworld
VMworld 2013
Terry Lyons, VMware
Meena Nagarajan, VMware
VMworld 2013: Practicing What We Preach: VMware IT on vCenter Operations Mana... - VMworld
VMworld 2013
Shreekant Ankala, VMware
Sreekanth Indireddy, VMware
Prafull Kumar, VMware
Similar to Cap2194: Migration from Weblogic to vFabric Cloud Application Platform (20)
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/07/intels-approach-to-operationalizing-ai-in-the-manufacturing-sector-a-presentation-from-intel/
Tara Thimmanaik, AI Systems and Solutions Architect at Intel, presents the “Intel’s Approach to Operationalizing AI in the Manufacturing Sector,” tutorial at the May 2024 Embedded Vision Summit.
AI at the edge is powering a revolution in industrial IoT, from real-time processing and analytics that drive greater efficiency and learning to predictive maintenance. Intel is focused on developing tools and assets to help domain experts operationalize AI-based solutions in their fields of expertise.
In this talk, Thimmanaik explains how Intel’s software platforms simplify labor-intensive data upload, labeling, training, model optimization and retraining tasks. She shows how domain experts can quickly build vision models for a wide range of processes—detecting defective parts on a production line, reducing downtime on the factory floor, automating inventory management and other digitization and automation projects. And she introduces Intel-provided edge computing assets that empower faster localized insights and decisions, improving labor productivity through easy-to-use AI tools that democratize AI.
What's Next: Web Development Trends to Watch - SeasiaInfotech2
Explore the latest advancements and upcoming innovations in web development with our guide to the trends shaping the future of digital experiences. Read our article today for more information.
How to Avoid Learning the Linux-Kernel Memory Model - ScyllaDB
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with a simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
Details of description part II: Describing images in practice - Tech Forum 2024 - BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Are you interested in learning about creating an attractive website? Here it is! Take part in the challenge that will broaden your knowledge about creating cool websites! Don't miss this opportunity, only in "Redesign Challenge"!
AI_dev Europe 2024 - From OpenAI to Opensource AI - Raphaël Semeteys
Navigating Between Commercial Ownership and Collaborative Openness
This presentation explores the evolution of generative AI, highlighting the trajectories of various models such as GPT-4, and examining the dynamics between commercial interests and the ethics of open collaboration. We offer an in-depth analysis of the levels of openness of different language models, assessing various components and aspects, and exploring how the (de)centralization of computing power and technology could shape the future of AI research and development. Additionally, we explore concrete examples like LLaMA and its descendants, as well as other open and collaborative projects, which illustrate the diversity and creativity in the field, while navigating the complex waters of intellectual property and licensing.
What Not to Document and Why (North Bay Python 2024) - Margaret Fero
We’re hopefully all on board with writing documentation for our projects. However, especially with the rise of supply-chain attacks, there are some aspects of our projects that we really shouldn’t document, and should instead remediate as vulnerabilities. If we do document these aspects of a project, it may help someone compromise the project itself or our users. In this talk, you will learn why some aspects of documentation may help attackers more than users, how to recognize those aspects in your own projects, and what to do when you encounter such an issue.
These are slides as presented at North Bay Python 2024, with one minor modification to add the URL of a tweet screenshotted in the presentation.
Sustainability requires ingenuity and stewardship. Did you know Pigging Solutions pigging systems help you achieve your sustainable manufacturing goals AND provide rapid return on investment?
How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush-waste, means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
INDIAN AIR FORCE FIGHTER PLANES LIST - jackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppSec - James Anderson
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
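A gate of this kind can be sketched in a few lines of Python. The scanner report schema below (a `findings` list with `id` and `severity` fields) is hypothetical, invented for the example; real scanners each have their own output format, so the parsing would need to be adapted.

```python
# Minimal CI/CD security-gate sketch: block the build when a scanner report
# contains findings at or above a configured severity threshold.
# NOTE: the report schema here is hypothetical, not any specific scanner's.
import json

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def blocking_findings(report_json, threshold="high"):
    """Return the findings severe enough to fail the pipeline stage."""
    findings = json.loads(report_json)["findings"]
    floor = SEVERITY_RANK[threshold]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= floor]

# Example report with one critical and one low-severity finding.
report = json.dumps({"findings": [
    {"id": "CVE-2024-0001", "severity": "critical"},
    {"id": "CVE-2024-0002", "severity": "low"},
]})

blockers = blocking_findings(report, threshold="high")
for f in blockers:
    print(f"BLOCKING: {f['id']} ({f['severity']})")
# In a real pipeline this script would exit non-zero when `blockers` is
# non-empty, causing the CI stage (and hence the gate) to fail.
```

The threshold could be tuned per branch, for example flagging `medium` findings on feature branches but blocking only on `high` and above for release branches.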
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
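The core of such mapping is often a CODEOWNERS-style lookup: the owner of the longest matching path prefix is responsible for the finding. The sketch below illustrates that idea with invented paths and team names; production tools would also consult git blame and issue-tracker metadata:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal CODEOWNERS-style resolver: the longest matching path prefix wins.
// Paths and team names here are hypothetical.
public class OwnerResolver {
    private final Map<String, String> ownersByPrefix = new LinkedHashMap<>();

    public void register(String pathPrefix, String owner) {
        ownersByPrefix.put(pathPrefix, owner);
    }

    public String ownerOf(String filePath) {
        String best = "unassigned";
        int bestLen = -1;
        for (Map.Entry<String, String> e : ownersByPrefix.entrySet()) {
            // A more specific (longer) prefix overrides a broader one.
            if (filePath.startsWith(e.getKey()) && e.getKey().length() > bestLen) {
                best = e.getValue();
                bestLen = e.getKey().length();
            }
        }
        return best;
    }
}
```

The "unassigned" fallback matters in practice: findings with no matching owner are exactly the ones that stall remediation, so they should be routed to a default security queue.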
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams and employing DevSecOps principles to integrate security throughout the development lifecycle.
Are you interested in dipping your toes into cloud-native observability, but, as an engineer, not sure where to get started with tracing problems through your microservice and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called OpenTelemetry, but before diving into the specifics, we'll start by demystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, and percentile, to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we'll explore the OpenTelemetry community: its Special Interest Groups (SIGs), its repositories, and how to become not only an end user but possibly a contributor. We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
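To ground the distributed-tracing terminology before the session, here is a toy model of trace context. This is NOT the OpenTelemetry API; it only mirrors the underlying idea that one trace id is shared across services while each unit of work gets its own span id linked to its parent:

```java
import java.util.UUID;

// Toy span model for illustration only; real code would use the OpenTelemetry SDK.
public class ToySpan {
    final String traceId;       // shared by every span in one end-to-end request
    final String spanId;        // unique per unit of work
    final String parentSpanId;  // null for the root span
    final String name;

    private ToySpan(String traceId, String parentSpanId, String name) {
        this.traceId = traceId;
        this.spanId = UUID.randomUUID().toString().substring(0, 16);
        this.parentSpanId = parentSpanId;
        this.name = name;
    }

    // Start a new trace (root span).
    public static ToySpan root(String name) {
        return new ToySpan(UUID.randomUUID().toString(), null, name);
    }

    // A child span keeps the trace id but records its parent's span id.
    public ToySpan child(String name) {
        return new ToySpan(this.traceId, this.spanId, name);
    }
}
```

Propagating exactly this pair of ids (trace id plus parent span id) across service boundaries is what lets a tracing backend reassemble one request's journey through many microservices.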
Attendees will leave with an understanding of key observability concepts, become grounded in distributed-tracing terminology, be aware of the components of OpenTelemetry, and know how to take their first steps toward an open-source contribution!
Key Takeaways: Open-source, vendor-neutral instrumentation is an exciting new reality as the industry standardizes on OpenTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve, and in order to achieve ubiquity, the project would benefit from growing its contributor community.
Transcript: Details of description part II: Describing images in practice - T... (BookNet Canada)
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
MYIR Product Brochure - A Global Provider of Embedded SOMs & Solutions (Linda Zhang)
This brochure gives an introduction to MYIR Electronics and to MYIR's products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and comprehensive solutions based on various architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services.
MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized and Special New" Enterprises in Shenzhen, China. Our core belief is that "our success stems from our customers' success," and we embrace the philosophy of "Make Your Idea Real, then My Idea Realizing!"
UiPath Community Day Kraków: Devs4Devs Conference (UiPath Community)
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to pique your interest in the subjects to be presented, and in the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
1. CAP2194 - An end-to-end case study
Migrating Java Applications from Weblogic to vFabric Cloud Application Platform
"Start your Journey to PaaS"
Thirumalesh Reddy, VMware
Rama Kanneganti, HCL
2. Migrating from Weblogic to vFabric - Opening Remarks
Cloud Application Platform – start your journey to PaaS.
Migration has real business benefits:
Increased agility and reliability
Reduced deployment downtimes
Scales to future growth
Reduced Cap-Ex and Op-Ex costs
Reduced hardware footprint
Highly adaptive to business and customer needs
3. Migrating Java Applications from Weblogic to vFabric - Agenda
Rationale for migration
Steps in migration
Taking advantage of the new platform
Concluding remarks
4. Problem Context
Enterprise custom Java applications are expensive and complex to build, deploy, scale and manage.
Typical problems:
Development: obsolete toolkits and no clear choice of toolkit (Struts, anyone?); no fully integrated stack, resulting in lack of control and risk
Operational: lack of support for developer-friendly tools; lack of industry and community
Industry: shortage of skill sets; fragmented market space leading to uncertainty
VMware problem: a higher percentage of downtime leading to poor customer satisfaction, and a high cost of development and operations
5. More details on the VMware problem
Business expects agility, reliability and lower costs...
... but I have performance, instability, manageability, monitoring and analysis issues with the current platform!
We used to own the servers running our apps. Now we don't know if the apps can get the resources they need to run well.
We've spent a fortune on monitoring tools. Why are our end users still the first to know about performance problems, and why do they take so long to solve?
VMware IT: what is needed is a scalable, robust, and manageable platform that is built on modern tools and technologies, with faster service execution at lower cost.
6. Why migrate?
General solution to our problem: migrate to a new platform.
Constraints: retain functionality; address perceived issues.
Creating a business case. Due to platform issues, there is a high cost for:
downtimes
ad-hoc fixing of problems
not having the right monitoring tools
development with a disparate technology stack
hardware costs to scale with complex, resource-intensive application servers
Migration costs:
new platform creation
code migration
cost of testing
7. Sample business case template
Migration of technology stacks is an IT ask.
Challenge: creating a business case with quantitative and qualitative benefits/outcomes, supporting data, and a cost model with the projected cost savings.
Core value proposition: increase availability and resiliency, and reduce development and operational costs.
Our business case:
Quantitative:
Reduce ongoing development and operational costs by 15%
Reduce hardware costs by 30%
Reduce downtime support requests by 25%
Reduce software costs by 40%
Qualitative:
Improve reliability, availability and scale of customer-facing portals
Standardize the technology stack with a fully fledged integrated platform
Increase agility, productivity and reusability
Embrace open source with abundant skill-set availability
Strategic:
Cloud Application Platform – start the journey to PaaS
8. Steps in migration
Defining the scope
Defining the new target platform
Defining a development platform
Steps in porting the application: pattern translations; compatibility library creation
Testing the application
Creating standard deployment: run books, training of Ops resources
Enhancing the platform: operational enhancements; embarking on the PaaS journey
9. Defining the scope of migration
Factors to consider:
Changes in architecture?
Functionality changes?
Code refactoring? Elimination of dead code and general clean-up; modularization to support future code changes; code changes to follow the new patterns for the new platform; fixing known bugs and metrics for code quality
Typical choices:
Migrate the code as-is: minimal changes
As part of migration, make changes that address current issues
Guideline: use the business case to create a migration strategy. Define phases and success criteria, guided by the business case and visibility requirements.
10. Our scope
Summary: We had to change the architecture to support the business case, change the code to support those architecture changes, and make code changes to adopt Spring paradigms such as IoC. No new functionality.
11. Defining the new target platform
General framework: Which components of vFabric to use? Which third-party (legacy) components to integrate?
Other improvement choices: caching (GemFire); monitoring and metrics (Hyperic); operational improvements (vCloud Director, vCO)
What we did:
Full-fledged vFabric stack to unlock the full potential
Spring WS for the back-end service integrations
Spring Security for securing the apps
Optimized application performance using GemFire
Stability and availability with fault-tolerant cluster deployment on lightweight tc Servers
Proactive monitoring, complete diagnostics and management using Hyperic
Application workload provisioning and management using vCloud Director and vCO
12. New target platform
Target custom web applications platform:
Presentation – AJAX, CSS, jQuery
Frameworks & tools / portals & widgets – Spring Web MVC, Spring Web Flow, Spring Framework 3.0
Web service integration – Spring WS
Security – Spring Security
Persistence – Hibernate
Audit/logging – Apache
LDAP – Spring LDAP
Caching – GemFire
Alert management – custom
Common application services and platform services: GemFire, Apache Web Server, Spring tc Server, Hyperic
Cloud infrastructure and management
Cloud Application Platform – start the journey to PaaS
13. Defining a development platform
Using a virtualized dev environment: STS and other tools; local installations of vFabric tc Server, GemFire, Hyperic; processes for setting up and self-provisioning the virtual environments
What we did:
Used STS as the primary development IDE, Maven as the build manager, and Bamboo as the build server
Sonar as the development quality-management platform
Automated self-provisioning of a fully configured dev IDE (STS) and deployment environments (vFabric tc Server, GemFire, Hyperic) using vCloud Director and vCenter Orchestrator
27. Pattern examples
Multiple logging frameworks were used for application logging.
Use of Spring WS interceptor
Usage of "member variables" in controllers – vFabric: convert to session variables in Spring MVC
Persisting application logging to a database
JSON support – leveraging the Spring-Jackson JSON libraries
Controller annotations – Beehive Pageflow annotations converted to Spring MVC RequestMappings
Netuix tag libraries converted to MVC and JSTL tag libraries
Authorization – Spring Security; configuration driven
Exception and error handling
28. Testing
Functional testing:
Re-used the existing automated functional tests: 90% of tests are these.
Modifications to support new front-end enhancements (middleware offline mode is new; tc Server session fail-over is new)
Operational testing:
New performance test cases needed to be added.
New test cases to test: offline mode; the vFabric fault-tolerance capability; the GemFire caching; the instance-provisioning predictability
29. Deployment Process
Pain point addressed: long downtime for upgrades.
WL Server deployment times were 20 minutes per node.
WL Server startup times were 20 minutes per node.
This led to at least 40 minutes of added downtime, even if all portals were handled in parallel.
Upgraded to Maven 2: easier management of components; simplified build process
New deployment scripts written: Ant and shell scripts for file movement and management; Bamboo plugins for release management
30. Deployment Architecture
Earlier pain points:
14 portals on 42 VMs: 28 managed nodes and 14 admin nodes
Unequal load – rarely used apps also consumed a VM
Lack of proactive monitoring of application-server resources
No centralized management; maintenance and support troubles
New deployment architecture – cloud-ready application platform, journey to PaaS:
Consolidated from 42 VMs to 14 VMs
Critical apps: 4 nodes; the rest 2 nodes each, plus 2 admin/Hyperic nodes and 2 GemFire nodes
App clusters by SLA and usage: customer-facing critical; customer-facing non-critical; non-web apps; internal apps
32. Deployment Architecture
Easier maintainability:
Hyperic reduced the need for admin nodes
One admin cluster used to manage multiple application clusters
Proactive monitoring and alerting: connection pools, threads, thread locks, and heap are monitored constantly
Optimized performance:
GemFire in-memory data grid improved performance
Optimized the JVM heap usage on tc Servers using GemFire client-server caching
Provided tc Server session failover
Provided a high-performance, scalable, and fault-tolerant cache solution
Provided application-level and user-level caching
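The caching approach described above can be sketched, in plain Java, as the classic look-aside (cache-aside) pattern: check the cache first and fall back to the backing store only on a miss. No GemFire API appears here; the class and method names are illustrative only:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Plain-Java sketch of the look-aside pattern; GemFire provides a distributed,
// fault-tolerant version of this same idea.
public class LookAsideCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for the backing database call
    private int misses = 0;

    public LookAsideCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public synchronized V get(K key) {
        V v = cache.get(key);
        if (v == null) {            // miss: load from the backing store, then cache
            misses++;
            v = loader.apply(key);
            cache.put(key, v);
        }
        return v;
    }

    public int misses() { return misses; }
}
```

In the deployment above, the same pattern is applied at two levels: the tc Server JVM holds a small client-side cache, while the GemFire server nodes hold the shared, fault-tolerant copy, which is what keeps the tc Server heaps small.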
33. Lessons learned
Lock down the scope and avoid functionality scope creep.
Be prepared to refactor code, as there is no one-to-one pattern translation for all the patterns.
Lock down the target platform components and avoid introducing new components.
Define usage patterns for the new frameworks and components, for faster on-ramp and better code quality.
Define the criteria and scope for the different caching levels for optimal performance.
Allocate a large amount of time for performance tests, as tuning the new platform is an iterative process.
Minimize business UAT test time, since no functionality change is involved, and complement it with automated regression testing.
40. Improved security using Spring Security and Spring LDAP
* Data sampled over 3-month averages before and after deployment
41. Concluding Remarks
Cloud Application Platform ready – start the journey to PaaS.
Migration to a new platform need not be an IT exercise – it has real business benefits. We presented the aspects of the business case.
The ROI can be substantially increased by addressing the non-functional aspects of the application supported by the new platform. We showed how we used increased virtualization, a modified deployment architecture, and monitoring to reduce costs and increase reliability.
More business impact can be made by adding the flexibility to the application that the vFabric platform provides. We shared some architecture patterns we used to make the application more flexible – JSON support, Spring WS, etc.
Thank You!
Most people do not migrate because they see it only as IT optimization. But with a proper business case, they can incorporate aspects that deliver business benefits as well.
This is the story of how VMware successfully migrated the portals used in customer purchases and support to the SpringSource stack, addressing the technical issues while responding to the needs of the business.
Any enterprise that built a complex piece of machinery to support its applications must be feeling the same way. They invested a lot in making a complex setup work, but it cannot keep up with the times. The trend now is for organizations to focus on what brings value to them: cloud and PaaS to address the technical needs, while IT focuses on delivering what is needed for the business.
Original applications: Weblogic Portal (Beehive) with local enhancements.
Issues:
Performance: was not scaling with the increasing number of users and amount of data.
Difficulty of managing: adding a feature took a long time (add, clean up, un-deploy and deploy), resulting in longer downtime.
Application instability: frequent reboots were required.
Difficulty of analysis: no built-in application monitoring; external monitoring did not provide internal details or proactive alerts.
What was needed: a scalable, robust, and manageable platform that is built on modern tools and technologies.
There was a debate on functionality improvement. We resolved it by saying:
We want to enable a resilient architecture – even if components went down, users should be able to conduct their business.
Whatever we need to change from a functionality perspective to achieve the above goal, we will do.
Any cleanup that is easy and supports the IT goals (HA, maintainability, etc.), we will do.
Other functionality changes will be entertained only on a need basis (in general, no changes).
The business case template includes:
ROI realization: when, what will change, and how much it will save.
Assumptions: the current assumptions about the cost of downtime, etc.
Eventually, it provides a framework for us to create a plan and a timetable.
These are standard steps. The most interesting part is the many enhancements we can bring to operations once we have moved to the platform.
One major constraint/requirement was that all user-bookmarked URLs should work after migration without any change. For example: on Weblogic, the login page was www.vmware.com/mysupport/login.portal. The same URL should work on Spring tc Server as well.
We went with the standard SpringSource reference platform. It was easy; it was proven; and it supported our needs.
The details are in the slide itself. Why we did these changes:
Moving to standardization and modern technologies: RMI is tough to debug and support.
Reducing the tight coupling: even if some components are down (some of those are not in our hands – they have different downtimes and different availability schedules), the application still works.
Non-functional enhancements: caching using GemFire.
Deployment changes: cleaning up earlier disparate standards as well as introducing new standardized tools.
We could have considered an emulation library as well, but in the long run it is more painful than it is worth. We do not think of this code as legacy – it is living code and, as such, deserves no such short-term fixes.
Pattern – Usage of "member variables" in controllers
vFabric: convert to session variables in Spring MVC.
@SessionAttributes("sessionAttributeName")
public class MySessionVarClass { ... }
void myRequestMappingMethod(@ModelAttribute("sessionAttributeName") MySessionVarClass msvc) { }

Pattern – Persisting application logging to a database
Due to the use of multiple logging frameworks, multiple implementations were used.
Standardized on Log4J; used an enhanced version of Log4JDBAppender.

Pattern – JSON support
WL: no out-of-the-box support for JSON serialization and deserialization; JSPs were used for serializing to JSON.
Define the object model on the controller layer and serialize it to JSON leveraging the Spring-Jackson JSON libraries:
public @ResponseBody OutputClass saveSomething(@RequestBody SomeInputClass input) {
    // A little logic to call the service layer
    // Some more logic to convert the service response to a UI object
    return output;
}

Pattern – Controller annotations
Beehive Pageflow annotations converted to Spring MVC RequestMappings.
Netuix tag libraries converted to MVC and JSTL tag libraries.

Pattern – Authorization
WL: container authorization was leveraged.
Spring Security with configuration-based access control and role-user mapping.

Pattern – Exception and error handling
WL: no common mechanism for error handling.
Classify exceptions into business and system exceptions and populate appropriate codes. Use the Spring MVC exception resolver to display appropriate error messages/pages.
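The exception-classification step of the last pattern can be sketched as follows. The class and code names here are invented for illustration; in the real application, a Spring MVC exception resolver would perform the final mapping to error pages:

```java
// Sketch of classifying exceptions into business vs. system errors before
// rendering an error page. Names are hypothetical.
public class ErrorClassifier {
    public static class BusinessException extends RuntimeException {
        public BusinessException(String msg) { super(msg); }
    }

    // Business exceptions get user-facing codes; everything else is a system error.
    public static String errorCode(Throwable t) {
        return (t instanceof BusinessException) ? "BUS-400" : "SYS-500";
    }
}
```

Splitting the two categories this way lets business errors show actionable messages to the user while system errors show a generic page and alert operations.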
Maven 1 lacked support for JSP tags.
Leverage project lifecycles – QA is interested in running automated integration tests only!
Transitive dependencies – a large number of open-source libraries were used, which was difficult to manage with Maven 1.
So we migrated to Maven 2.
The VMware Cloud Application Platform combines the Spring Framework for building new applications with a complete set of application platform services required to run and manage these applications.
[CLICK] Spring Framework: Spring is a comprehensive family of developer frameworks and tools that enables developers to build innovative new applications in a familiar and productive way, while enabling the choice of where to run those applications – inside the datacenter or on private, hybrid, or public clouds. Spring enables developers to create applications that:
Provide a rich, modern user experience across a range of platforms, browsers and personal devices
Integrate applications using proven enterprise application integration patterns, including batch processing
Access data in a wide range of structured and unstructured formats
Leverage popular social media services and cloud service APIs
[CLICK] VMware vFabric: VMware vFabric is a comprehensive family of application services uniquely optimized for cloud computing, including a lightweight application server, global data management, cloud-ready messaging, dynamic load balancing, and application performance management.
[CLICK] The products behind these services include:
Lightweight application server: tc Server, an enterprise version of Apache Tomcat, is optimized for Spring and VMware vSphere and can be instantaneously provisioned to meet the scalability needs of modern applications.
Data management services: GemFire speeds application performance and eliminates database bottlenecks by providing real-time access to globally distributed data.
Cloud-ready messaging service: RabbitMQ facilitates communications between applications inside and outside the datacenter.
Dynamic load balancer: ERS, an enterprise version of the Apache web server, ensures optimal performance by distributing and balancing application load.
Application performance management: Hyperic enables proactive performance management through transparent visibility into modern applications deployed across physical, virtual, and cloud environments.
Policy-driven automation: Foundry is the tentative name for a new offering, still under development, that is focused on policy-based automation of application and platform configuration and provisioning tasks.