BES3141 - Class Handout - Abhishek Shinde - AU 2024

Download as pdf or txt
Download as pdf or txt
You are on page 1of 20

3141

Optimizing Revit Structural Intelligent BIM Models with LLMs & Autodesk Platform Services
Abhishek Sanjay Shinde
TYLIN

Learning Objectives
• Constructing your LLM use case with API offerings from Autodesk Platform Services.
• Using Revit API methods in a Neo4j Revit add-in to query the model and generate a Revit CSV.
• Loading the Revit CSV data into a Neo4j graph for a GraphRAG LLM application that queries the Neo4j database.
• Fine-tuning techniques that AEC power users can apply to their LLMs and RAG applications.
• Structural optimization workflows beyond Galapagos (multi-objective optimization).
• Langchain (the concepts of chains and agents inside Langchain).
• Resources related to RAG, agentic workflows, knowledge graphs, and GraphRAG for AECO.

Description
Large Language Models (LLMs) are advanced AI systems trained on vast amounts of text data
to understand and generate human-like language. They predict and generate text, images, and
videos from input text prompts. These models are capable of tasks like translation,
summarization, and conversation, with applications such as anomaly detection and design
generation for augmenting architectural workflows. The underlying computational model for
these LLMs is the artificial neural network (ANN) built on the transformer architecture
introduced in 2017. Transformer-based LLMs now run on our devices, laptops, and
workstations, including GPT-4 (used by OpenAI's ChatGPT), BERT, Meta's Llama 2,
Google's PaLM, and Mistral 7B.
The recent ubiquitous use of generative tools like Midjourney, Stable Diffusion, and ComfyUI
for early-stage architectural and façade design, along with OpenAI's ChatGPT, has
opened the doors for AI applications in AEC. However, their use for structural design and
engineering is still in its infancy. Today, we investigate a prototype application workflow showing how
LLMs can be used in conjunction with structural optimization to query our BIM models while
keeping computational design workflows in the loop. A step-by-step process shows how
one can prepare sample Revit BIM models for an APS LLM extension and augment the LLM
workflow with a GraphRAG chatbot that serves as a fine-tuning pipeline. This
handout should be used in conjunction with the presentation to prepare your data
workflows for further processing. We will also look at various fine-tuning techniques and
some precedent studies in RAG, GraphRAG, and BIM-as-graph. Code for the APS LLM
extension and sample files (Revit models, Rhino and Grasshopper files) can be accessed at the
GitHub URL mentioned here.

Page 1
Speaker

Abhishek Shinde is a licensed Architect in India, with a wealth of experience as a
Computational Designer, BIM software and application developer, and aspiring
Machine Learning Engineer.

He is building expertise in AEC-ML research areas, including CAD-LLMs, deep-learning-aided
computational design workflows, optimization, and graph algorithms. At
TYLin Silman Structural Solutions, Abhishek is dedicated to improving structural
engineering workflows by developing add-ins and extensions for BIM automation and
structural analysis automation inside the Rhino and Revit software ecosystems. He is also
leading the effort within the Building Sector of TYLin to develop a Large Language
Model pipeline for assisting engineers. He is also actively involved in developing
computational design workflows in open-source AEC software ecosystems such as Speckle
and openBIM. Before joining TYLin, he worked as a façade
designer at the award-winning façade manufacturing firm Island Exterior Fabricators
LLC, where he collaborated with a team of computational design experts to develop a
computational design workflow for digitally fabricating unitized curtain wall systems for
several commercial projects in Boston and New York. Apart from his work experience in
BIM and computational design, he has extensive experience in advanced digital
fabrication and robot-aided fabrication.

His ongoing independent research targets the extraction of granular AECO metadata to
achieve "One Computational Knowledge Graph Data Model" (OCKGDM).
Abhishek's ambition is to become a leading AEC Machine Learning engineer, driving
innovation in DfMA, robotic fabrication, and advanced computational design workflows.

Constructing an AEC business case using LLMs for:
[Your firm]:_______________________________ [Your name]:___________________

LLMs in AECO: Application & Opportunities with Autodesk Platform Services

Autodesk Platform Services (APS), formerly known as Forge, offers a wide range of APIs to
help developers create custom business solutions. Use the checkboxes to note which APIs
could be combined with LLMs for your business case. They are grouped into two parts.

Here are some of the key API offerings from Autodesk as of 20 September 2024:

AEC DATA (ARCHITECTS, STRUCTURAL ENGINEERS, MEP, CONTRACTORS, FABRICATORS, AECO STUDENTS)

• ☐ AEC Data Model API: Access granular design data directly from the cloud.
• ☐ Data Visualization API: Visualize custom data on design models to create Digital
Twin solutions.
• ☐ Data Management API: Access and manage data from BIM 360 Team, Fusion
Team, and other services.
• ☐ Data Exchange API: Share design data across applications like Revit, Rhino, and
Inventor without importing and exporting entire design models.
• ☐ Model Derivative API: Translate designs from one CAD format to another.
• ☐ Design Automation API: Automate repetitive tasks and run scripts on design files in
the cloud.
• ☐ Tandem Data API: Digital twin capabilities: read and write properties and assets.
• ☐ Manufacturing Data Model API: Access and store manufacturing data in the cloud.
Allows you to read, write, and extend data, access BOMs, and integrate with ERP systems.
• ☐ Reality Capture API: Photogrammetry capability (RCM, OBJ, RCS, GeoTIFF).

CLOUD, AUTHENTICATION AND MANAGEMENT (BIM DEVELOPERS, CLOUD MANAGERS, SOFTWARE DEVELOPERS)

• ☐ BIM 360 API: Integrate with the BIM 360 platform to extend its capabilities in the
construction ecosystem.
• ☐ Autodesk Construction Cloud API: Integrate with the unified Autodesk Construction
Cloud platform.
• ☐ Viewer SDK: Render 2D and 3D model data within a browser (AutoCAD, Fusion
360, Revit).
• ☐ Token Flex API: Manages and tracks usage of Autodesk software licenses, helping
organizations optimize their software investments.
• ☐ Authentication API: Generate tokens using OAuth 2.0 to authenticate requests
made to APS APIs.
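As an illustration of the Authentication API flow, the sketch below composes (but does not send) an OAuth 2.0 client-credentials token request. The endpoint and scope shown reflect the APS v2 documentation at the time of writing; verify them against the current docs before use, and never hard-code real credentials.

```python
import urllib.parse

def build_token_request(client_id: str, client_secret: str, scope: str = "data:read"):
    """Build (but do not send) an OAuth 2.0 client-credentials token request
    for the APS Authentication API. Endpoint per the APS v2 docs; confirm
    against current documentation before relying on it."""
    url = "https://developer.api.autodesk.com/authentication/v2/token"
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return url, headers, body

# "MY_ID" / "MY_SECRET" are placeholders for your APS app credentials.
url, headers, body = build_token_request("MY_ID", "MY_SECRET")
```

The returned token is then sent as a Bearer header on subsequent APS API calls.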

THE LLM USE CASES, ALONG WITH AUTODESK APIS, WHICH YOUR FIRM WOULD EXPLORE:

1. Automated Design and Drafting: Using customized prompts linked to detailed
drawings, LLMs can speed up BIM detailing and drafting, including markup automation
and data labelling using vision-language models.
2. Project Management Tasks: QA-chatbot-style LLMs can assist in tasks like
automated scheduling and resource allocation.
3. Cost Estimation: By analyzing historic data, LLMs can suggest cost-reduction
mechanisms via smart algorithms.
4. Risk Management: LLMs can help reduce risk by flagging delays and issues during
different phases of construction, and can summarize RFIs, submittals, and construction
drawings.
5. BIM Integration: Beyond automated design and drafting, LLMs can assist with broader
BIM integration tasks.
6. Regulatory Compliance: Checking legal and building-code compliance of BIM models.
7. Customer Interaction: Project status, deadlines, design options, and timelines.
8. Material Selection and Optimization: Suggesting materials that reduce cost and carbon
emissions and improve project quality.
9. Sustainability Analysis: Evaluate total environmental impact by analyzing the flow of
data, materials, and energy usage.
10. Training and Knowledge Management: Creating a knowledge base for construction
professionals.
11. Co-design and Optioneering: Co-design with architects and consultants; easy querying
of the AEC Data Model; design ranking and optioneering for sustainable design and
embodied-carbon monitoring.
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________

Limitations of Scalability of Large Language Models (LLMs)

Scaling large language models is an active area of research in the machine learning
community; it involves addressing several key challenges and exploring various research
areas. Here are some of the most promising enhancements and research directions:

1. Model Efficiency:
a. Model Compression: Techniques like pruning, quantization, and knowledge
distillation are being developed to reduce the size and computational
requirements of LLMs without significantly compromising their performance.
b. Sparse Models: Implementing sparse architectures, where only relevant parts of
the model are activated for specific tasks, can improve efficiency.
2. Handling Long Contexts:
a. Extended Context Length: Research is focused on enabling LLMs to handle
longer sequences of text, which is crucial for tasks requiring extensive context,
such as document summarization and complex question answering.
3. Multimodal Capabilities:
a. Integration of Multiple Modalities: Future LLMs will likely incorporate text, images,
audio, and possibly other sensory inputs to create more comprehensive and
versatile models.
4. Domain Specialization:
a. Fine-Tuning for Specific Domains: Tailoring LLMs to specific industries like
healthcare, finance, and law can improve their accuracy and relevance in
those fields.
5. Real-Time Adaptation:
a. Dynamic Learning: Developing models that can adapt in real time to new
information and user feedback will make LLMs more responsive and accurate.
6. Ethical and Fair AI:
a. Bias Mitigation: Ongoing research aims to reduce biases in LLMs and ensure
they produce fair and ethical outputs. Techniques like Reinforcement Learning
from Human Feedback (RLHF) are being explored to address these issues.
7. Scalability and Infrastructure:
a. Efficient Hardware Utilization: Advances in hardware, such as specialized AI
chips and distributed computing, are essential for scaling up LLMs.
b. Energy Efficiency: Research is also focused on making LLMs more energy-
efficient to reduce their environmental impact.
8. Interpretability and Transparency:
a. Explainable AI: Enhancing the interpretability of LLMs so that we can understand
and trust their decision-making processes is a critical area of research.

These advancements will help make LLMs more powerful, efficient, and applicable across a
wider range of tasks and industries.
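The model-compression idea under point 1 can be made concrete with a toy sketch of post-training quantization: float weights are mapped to 8-bit integers plus a scale factor, trading a small amount of precision for a 4x reduction in storage. This is an illustrative simplification, not a production quantization scheme.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8.
    Toy sketch of the 'model compression' idea, not a production scheme."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0              # map the largest weight to +/-127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

w = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(w, w_hat))   # bounded by scale / 2
```

The reconstruction error is bounded by half the quantization step, which is why moderate bit-widths often preserve model quality.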

Fine-tuning Strategies for Large Language Models (LLMs)

Fine-tuning involves training a pre-trained model on a specific dataset to adapt it to a particular
task. This process adjusts the model's internal parameters to improve its performance on that
task. There are several strategies and methods for fine-tuning LLMs, including the following:

1. Prompt Optimization: Crafting specific prompts to guide the model's
responses more effectively. By providing clear and detailed instructions, you can
improve the relevance and accuracy of the model's outputs.
2. Model Compression: Techniques like pruning, quantization, and knowledge distillation
can reduce the size of the model while maintaining its performance. This makes the
model more efficient and faster.
3. Chain-of-Thought Prompting: Encourages the model to generate
intermediate reasoning steps before arriving at a final answer. It helps improve the
model's performance on complex reasoning tasks.
4. Hyperparameter Tuning: Adjusting hyperparameters such as learning rate, batch size,
and number of epochs can significantly impact the model's performance during training.
5. Ensemble Methods: Combining the outputs of multiple models can improve overall
performance. This approach leverages the strengths of different models to produce more
accurate and robust results.
6. Active Learning: Iteratively selecting the most informative data points for
labeling and training. It helps improve the model's performance with less
labeled data.
7. Data Augmentation: Enhancing the training dataset with additional, relevant data can
improve the model's performance. This can include synthetic data or data from similar
tasks.
8. Reinforcement Learning from Human Feedback (RLHF): Using human feedback to
guide the model's learning process can be particularly useful for tasks where human
judgment is crucial.
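Strategy 4, hyperparameter tuning, can be illustrated with a dependency-free grid search. The `validation_score` function here is a stand-in for a real fine-tuning run; in practice each combination would trigger a training job and return a held-out metric.

```python
import itertools

def validation_score(lr, batch_size):
    """Stand-in for a real fine-tuning run; this dummy metric peaks at
    lr=1e-4, batch_size=16, mimicking a validation score surface."""
    return -abs(lr - 1e-4) * 1e4 - abs(batch_size - 16) / 16

# Candidate hyperparameter values to sweep.
grid = {"lr": [1e-5, 1e-4, 1e-3], "batch_size": [8, 16, 32]}

# Evaluate every combination and keep the best-scoring one.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda p: validation_score(p["lr"], p["batch_size"]),
)
```

For expensive fine-tuning runs, random search or Bayesian optimization is usually preferred over an exhaustive grid.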

Please list the fine-tuning strategies for your AEC CAD-LLM business case:
[Your firm]:_________________________ [Your name]:_______________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________

______________________________________________________________________
______________________________________________________________________
______________________________________________________________________

What is Retrieval-Augmented Generation

Apart from fine-tuning, there are several other methods for enhancing the
performance of large language models. Retrieval-Augmented Generation (RAG) is one such
complementary approach.

RAG integrates a retriever and a generator. The retriever pulls pertinent data
from external sources, which the generator leverages to create responses that are more precise
and contextually appropriate. This approach is especially advantageous for tasks that need
current information or extensive knowledge bases. While fine-tuning alters the model itself, RAG
enhances the model's capabilities by supplying additional context from external data sources.
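The retriever-plus-generator split can be sketched in a few lines. The keyword-overlap retriever below is a toy stand-in for a real embedding-based retriever, and the "generation" step is just prompt assembly; the documents and query are made-up examples.

```python
def retrieve(query, documents, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment_prompt(query, documents):
    """Generator side: prepend retrieved context to the user prompt,
    which would then be sent to an LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Pratt truss in model A uses W8x10 steel sections.",
    "Project schedule updated for phase 2.",
]
prompt = augment_prompt("Which steel sections does the Pratt truss use?", docs)
```

A production system would replace the overlap score with vector similarity over embeddings, which is exactly the role the frameworks listed below play.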

Different Open-Source Frameworks for RAG (Retrieval-Augmented Generation)

There are several open-source frameworks that support Retrieval-Augmented
Generation (RAG). Here are some notable ones:

1. Haystack: Created by Deepset, Haystack serves as a comprehensive framework
for constructing RAG pipelines, which is particularly beneficial for document
search and question answering.
2. Langchain: This framework aids in developing context-aware applications
powered by language models, facilitating the integration of retrieval and
generation tasks.
3. REALM: Google’s Retrieval-Augmented Language Model (REALM) is tailored for
open-domain question answering and incorporates retrieval within the training
process.
4. Hugging Face Transformers: Hugging Face offers tools and models that
support RAG, enabling the creation of custom retrieval-augmented generation
systems.
5. Weaviate: An open-source vector search engine utilized for building RAG
applications by storing and retrieving embeddings.
6. LlamaIndex: A framework designed to assist in developing RAG applications by
indexing and fetching relevant documents.
7. NVIDIA NeMo Guardrails: Provides resources for constructing safe and
dependable RAG applications.
8. RAGFlow: An open-source RAG engine based on deep document
comprehension, offering an optimized workflow for various applications.

Define your RAG frameworks or LLM fine-tuning methods here for your AEC use case:
[Your firm]:_________________________ [Your name]:_______________________

______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________

Preparing BIM Data with PyRevit and a C#/.NET Add-in

We first create a Revit file in version 2024. The presentation explains how one can prepare
BIM data using PyRevit for embodied-carbon tracking. You can design your own
Revit model and follow the process for Karamba3D and Galapagos optimization.
For us, the most important parts are how a C#/.NET add-in can create a live
connection to a Neo4j Aura database, and how one can take a sample CSV file extracted
from Rhino and Revit that stores the metadata and visualize it in Neo4j Aura.
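To make the CSV-preparation step concrete, here is a minimal sketch of flattening element metadata into a nodes CSV and a relationships CSV suitable for graph import. The element dictionaries, field names, and the `ON_LEVEL` relationship are hypothetical examples, not the actual schema used by the add-in.

```python
import csv, io

# Hypothetical structural elements as exported from a Revit add-in.
elements = [
    {"id": "e1", "category": "StructuralFraming", "type": "W8x10",  "level": "L1"},
    {"id": "e2", "category": "StructuralColumn",  "type": "HSS4x4", "level": "L1"},
]

def to_csv(rows, fields):
    """Serialize a list of dicts to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

nodes_csv = to_csv(elements, ["id", "category", "type", "level"])

# One relationship row per element: (element)-[:ON_LEVEL]->(level)
rels = [{"from": e["id"], "rel": "ON_LEVEL", "to": e["level"]} for e in elements]
rels_csv = to_csv(rels, ["from", "rel", "to"])
```

Separating node rows from relationship rows mirrors how graph databases typically ingest tabular data.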

.NET Add-in Reference Code for the Neo4j Driver Connection

For this project, I am using the reference Neo4j C# driver connection offered by the e-verse
platform. The source code is available on GitHub: Integrate Neo4j with Revit: Graph
Database for AEC Data (e-verse.com)

Source: e-verse (AEC tech consulting firm)

Manually Loading a CSV into Neo4j Aura DB for Testing Purposes

Please use the PDF provided in the additional resources, "AU2024_Pratt
Truss CSV visualization inside Neo4J Aura DB.docx" or "AU2024_Pratt Truss CSV
visualization inside Neo4J Aura DB.pdf", uploaded to Autodesk Drive.

If you are unable to access it there, it will be available soon on the Autodesk University website.
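For orientation, a Cypher `LOAD CSV` statement like the one composed below can pull such a file into Aura. The column names (`id`, `category`, `type`, `level`) and the `Element` label are assumptions matching a simple export, not the exact schema from the handout's resources; adapt them to your CSV headers.

```python
def load_csv_cypher(url: str, label: str) -> str:
    """Compose a Cypher LOAD CSV statement for Neo4j / Aura. Column names
    (id, category, type, level) are assumed to match the CSV header."""
    return (
        f"LOAD CSV WITH HEADERS FROM '{url}' AS row "
        f"MERGE (n:{label} {{id: row.id}}) "
        f"SET n.category = row.category, n.type = row.type, n.level = row.level"
    )

# Hypothetical hosted CSV; Aura requires the file to be reachable over HTTPS.
stmt = load_csv_cypher("https://example.com/elements.csv", "Element")
```

Using `MERGE` on the `id` property keeps repeated test imports idempotent.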

Anatomy of the Karamba3D Optimization Grasshopper Scripts

For the single-objective optimization process, I use Galapagos twice in my computational
design Grasshopper pipeline. This does not perform multi-objective optimization, which is
why we explore multi-objective techniques later.

Let's explore the Grasshopper script we used. It is divided into the following parts:

1. Using Rhino.Inside.Revit to extract steel section profiles from the Revit model

Here is the closeup view

2. Rebuild a Pratt truss per specifications and design the custom webbing

3. Define Cross-section as below

4. Define Loads and Supports as below

5. Define sorting procedure as below

6. Define Elements and Build a Truss

7. Optimize cross-sections using Karamba3D's OptiCrossSection component as below
8. Extracting Mesh for One Click LCA profiles as below

9. Define single-objective optimization using Galapagos as below (second run)

Different Structural Optimization Techniques for Intelligent Optimal Truss Design

For multi-objective optimization to reduce both the embodied carbon and the structural weight
of the Pratt truss, there are several off-the-shelf plugins like Wallacei, Opossum, and Octopus,
which you can find on the Food4Rhino website:

Source (left to right): Octopus, Wallacei, Opossum (Grasshopper add-in logos from the Food4Rhino website)

We are currently developing a workflow using One Click LCA and Octopus; it is a work in
progress and will be presented at upcoming conferences. One can develop a similar
optimization process for their own use cases.
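At the heart of any multi-objective approach is the Pareto front: the set of designs for which no other design is better on both objectives. A minimal sketch, minimizing structural weight and embodied carbon over made-up truss candidates:

```python
def pareto_front(designs):
    """Return non-dominated designs minimizing (weight, carbon).
    A design is dominated if another is no worse on both objectives
    and strictly better on at least one."""
    front = []
    for d in designs:
        dominated = any(
            o["weight"] <= d["weight"] and o["carbon"] <= d["carbon"]
            and (o["weight"] < d["weight"] or o["carbon"] < d["carbon"])
            for o in designs
        )
        if not dominated:
            front.append(d)
    return front

# Hypothetical truss candidates (weight in kg, carbon in kgCO2e).
trusses = [
    {"name": "A", "weight": 120, "carbon": 300},
    {"name": "B", "weight": 100, "carbon": 350},
    {"name": "C", "weight": 130, "carbon": 340},  # dominated by A
]
front = pareto_front(trusses)
```

Plugins like Octopus and Wallacei evolve populations toward this front rather than filtering a fixed list, but the dominance test is the same.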

More reading is available at:
https://www.researchgate.net/publication/373694681_Modern_software_features_for_shape_optimization_of_shells

Precedents in AEC Models as a Graph

Semantic representation of architectural elements is an active subject of investigation. I would
like to credit Professor Wassim Jabi's tool TopologicPy and Professor Thomas Wartman's group
at the Institute of Applied Architecture (ITA Zurich).

Please refer to the presentation for precedents.

Querying the Revit Structural Model with APS & GraphQL

What is GraphQL

GraphQL is an open-source query language for APIs and a runtime for executing those queries
with your existing data. It was developed by Facebook in 2012 and released publicly in 2015.
Link to the GraphQL website:

Key Features of GraphQL:

• Flexible Queries: Clients can request exactly the data they need, no more and no less.
This reduces the amount of data transferred over the network and improves
performance.
• Single Endpoint: Unlike REST APIs, which require multiple endpoints for different
resources, GraphQL APIs use a single endpoint to fetch all the required data.
• Strongly Typed Schema: GraphQL APIs are defined by a schema that specifies the
types of data that can be queried. This schema ensures that clients only request valid
data and helps catch errors early.
• Real-time Data: GraphQL supports subscriptions, allowing clients to receive real-time
updates when data changes.
• Introspection: Clients can query the schema itself to understand what queries are
possible, making it easier to explore and use the API.

How GraphQL Works:

• Schema Definition: Developers define a schema that describes the types of data and
the relationships between them.
• Queries and Mutations: Clients send queries to request data and mutations to modify
data. The server processes these requests and returns the appropriate data.
• Resolvers: Functions that handle the logic for fetching the data specified in the schema.
Each field in the schema has a corresponding resolver.
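To make the query/variables shape concrete, the sketch below composes a GraphQL POST body in Python. The field names (`elements`, `properties`) and the `GetElements` operation are illustrative only, not the actual AEC Data Model schema; consult the APS GraphQL documentation for the real types.

```python
import json

def build_graphql_request(project_id: str) -> str:
    """Compose a GraphQL POST body with a query and variables.
    The schema fields here are illustrative assumptions, not APS's."""
    query = """
    query GetElements($projectId: ID!) {
      elements(projectId: $projectId) {
        name
        properties { name value }
      }
    }
    """
    return json.dumps({"query": query, "variables": {"projectId": project_id}})

# Round-trip through JSON as an HTTP client would.
body = json.loads(build_graphql_request("proj-123"))
```

Passing values through `variables` rather than string-interpolating them into the query is the idiomatic (and injection-safe) GraphQL pattern.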

What is Langchain


Langchain is a framework designed to facilitate the development of applications that leverage
large language models (LLMs). It provides tools for prompt engineering, data retrieval, model
orchestration, and workflow management, making it easier to build complex applications that
require natural language understanding and generation.

Langchain is not specifically a Retrieval-Augmented Generation (RAG) framework, but it can be
used to build applications that incorporate RAG techniques by integrating retrieval mechanisms
and generation models. For example, you can use Langchain to retrieve relevant documents
or data and then use an LLM to generate responses based on that information.

Key Features of Langchain:

1. Prompt Engineering: Helps in designing and managing prompts for LLMs.
2. Data Retrieval and Integration: Facilitates the retrieval of relevant data and its
integration into the application.
3. Model Orchestration: Manages the interaction between different models and
components.
4. Debugging and Observability: Provides tools for monitoring and debugging
applications.
5. Deployment and Production Readiness: Supports deploying applications in
production environments.
6. Query Decomposition: Langchain can decompose complex queries into subqueries,
which can then be executed against the graph database.
7. Dynamic Prompting: Use Langchain to dynamically generate prompts for the LLM
based on the context provided by the graph queries.
8. Hybrid Retrieval: Combine vector semantic search with traditional graph queries to
enhance the retrieval process. Langchain can route queries to the appropriate retrieval
method based on the query type.
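Features 6 and 7 above can be illustrated without the framework itself. The sketch below is dependency-free plain Python standing in for Langchain's chains: a naive decomposition that splits a compound question, and a prompt template filled with graph context. Both the splitting rule and the context string are toy assumptions.

```python
def decompose(query: str):
    """Naive query decomposition: split a compound question on ' and '.
    Langchain's chains and agents do this far more robustly; plain
    Python is used here to keep the sketch dependency-free."""
    return [part.strip().rstrip("?") + "?" for part in query.split(" and ")]

def dynamic_prompt(subquery: str, graph_context: str) -> str:
    """Fill a prompt template with context retrieved from the graph."""
    return f"Using this graph context:\n{graph_context}\nAnswer: {subquery}"

subs = decompose("Which beams are on level 1 and what is their total weight?")
prompt = dynamic_prompt(subs[0], "(:Beam)-[:ON_LEVEL]->(:Level {name:'L1'})")
```

Each subquery would then be routed to the appropriate retrieval method (vector search or a Cypher graph query) before generation.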

Alternatives to Langchain:

1. Guidance by Microsoft: A library for structured prompting and constrained
generation with LLMs.
2. Hugging Face Agents: Tools and libraries from Hugging Face for developing and
deploying LLM applications.
3. Griptape: A framework for building AI applications with a focus on modularity and ease
of use.
4. Haystack by Deepset: An open-source framework for building search systems and
question-answering applications.
5. Dust: A platform for building and deploying AI applications with a focus on simplicity and
scalability.
6. AutoChain by Forethought Technologies: A tool for automating workflows and
integrating AI into business processes.
7. Semantic Kernel: Microsoft's open-source SDK for orchestrating LLMs and
integrating them into applications.
8. LlamaIndex: A framework for building and managing LLM applications with a focus on
indexing and retrieval.
9. SuperAGI: A platform for building and deploying AI applications with advanced
orchestration capabilities.
10. FlowiseAI: A drag-and-drop UI for building LLM flows and developing Langchain apps.

Exploring Potentials for CAD-LLM Research beyond optimization

What are Knowledge Graphs: Things, not strings

As we know, LLMs lack context and knowledge retention, so complex questions asked by
users need to be answered with RAG. To support reasoning over semantics (things and their
context) rather than text strings, we use knowledge graphs (for example, the panel of linked
facts Google shows when you search for "NASA"). Knowledge graphs capture high-fidelity
relationships and support questions such as which nodes have high centrality, what is likely to
come next, and which links are likely to emerge (mutate).

An early, influential piece of graph-based ranking was Google's patented PageRank algorithm,
shown in the picture below. The patent expired in 2019, and the technique is now used by
several graph databases.

Source: Wikipedia. The PageRank patent expired in 2019.
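PageRank itself is compact enough to sketch: a power iteration over an adjacency dict, where each node repeatedly distributes its rank across its out-links. The three-node graph is a made-up example.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Iterative PageRank over an adjacency dict {node: [out-links]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if not outs:                      # dangling node: spread rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank

g = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(g)   # C collects links from both A and B, so it ranks highest
```

Graph databases expose this and related centrality measures as built-in algorithms, which is what makes the "which nodes matter" questions above queryable.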

Adding semantics through knowledge graphs helps keep entropy from creeping into LLM
outputs (see the Microsoft Research blog), and you can even ask an LLM to generate prompts
for this data (Dr. Jim Webber, Chief AI, Neo4j). LLMs are approximate, non-deterministic
systems; that is quite acceptable and amenable to humans, but their output is not always true.
Yet the most primal use case humans have for LLMs is fact-checking and knowledge inquiry.

Semantics in Neo4j knowledge graphs came from the Semantic Web. The organizing principles
are nodes, properties, directions, labels, edges, and relationships, which make the data
shareable. For SPARQL versus Neo4j and the Semantic Web stack, see the book by
Dr. Jim Webber.

What is Neo4J

SOURCE: HTTPS://NEO4J.COM/

Neo4J is a powerful graph database that can store and manage the knowledge graphs
generated by GraphRAG.

• Cypher Queries: Use Neo4J’s Cypher query language to perform complex graph
queries, enabling detailed analysis and retrieval of information.
• Vector Indexing: Leverage Neo4J’s vector indexing capabilities to perform semantic
searches, enhancing the retrieval process by combining traditional graph queries with
semantic similarity searches.

You can read more about this at the website linked above.

What is GraphRAG

SOURCE: HTTPS://MICROSOFT.GITHUB.IO/GRAPHRAG/

GraphRAG is a technique developed by Microsoft Research that enhances Retrieval-
Augmented Generation (RAG) by using knowledge graphs. Here's a brief overview:

1. Knowledge Graph Creation: GraphRAG uses Large Language Models (LLMs) to
extract entities, relationships, and key claims from raw text, forming a structured
knowledge graph.
2. Hierarchical Clustering: The extracted information is organized into a hierarchical
structure using techniques like the Leiden algorithm, which helps in understanding the
dataset holistically.
3. Community Summaries: Summaries are generated for each community within the
graph, aiding in the comprehension of large datasets.
4. Enhanced Querying: At query time, these structures are used to provide more relevant
and contextually rich responses by leveraging both global and local search modes.

GraphRAG aims to improve the performance of LLMs on private datasets by providing a
more nuanced understanding of the data compared to traditional RAG approaches.
Augmenting the Knowledge Graph using GraphRAG via Neo4j, Autodesk Platform
Services GraphQL, and Langchain to achieve One Computational Knowledge
Graph Data Model (OCKGDM)

As mentioned in the presentation, GraphQL and Neo4j are a match made in heaven.
Augmenting GraphRAG with Neo4j, GraphQL, and Langchain can significantly enhance its
capabilities. By integrating Neo4j, you can store GraphRAG outputs: import the structured
knowledge graphs created by GraphRAG into Neo4j for efficient storage and querying.

Autodesk Platform Services (APS) GraphQL can be used to create a flexible and efficient
API layer for querying the knowledge graph:

• Unified Query Interface: GraphQL provides a unified interface to query both the graph
database and other data sources, making it easier to fetch and manipulate data.
• Dynamic Queries: With GraphQL, you can create dynamic queries that allow clients to
specify exactly what data they need, reducing the amount of data transferred over the
network.
• Integration with Autodesk Services: APS GraphQL can integrate with various
Autodesk services, enabling the retrieval of design and engineering data that can be
linked to the knowledge graph.

SOURCE: HTTPS://NEO4J.COM/DEVELOPER-BLOG/MICROSOFT-GRAPHRAG-NEO4J/

This integration can significantly improve the performance and accuracy of retrieval-augmented
generation systems by leveraging the strengths of each component.
Future Workflows beyond Optimization for OCKGDM:

Several future workflow integrations include:

1. Using LangGraph for agentic orchestration.
2. Combining Microsoft AutoGen and Azure AI Studio workflows for document intelligence.
3. Combining Microsoft Copilot Studio agents for document intelligence.

THANK YOU FOR READING THIS HANDOUT!

[Your firm]:_________________________ [Your name]:_______________________

