
MindStudio Documentation

MindStudio is a platform for developing model-agnostic AI-powered applications.


The Application Layer of AI
The application layer acts as the bridge between raw AI capabilities and real-world
utility. While foundational models provide the base intelligence, the application
layer molds it into a shape we can use and gives it specific capabilities that stretch
far beyond any individual model.
Building AI-powered apps using MindStudio is simple and doesn't require any
technical knowledge to get started.

With MindStudio you can:


Fine-tune and publish an AI-powered app using an intuitive no-code interface

Share AIs via a simple link

Charge subscription fees for access to your AI

Track the usage of your AI via your AI dashboard.


Getting Started with MindStudio


Everything you need to know to start building your first AI

Overview

MindStudio is a no-code AI app builder that allows you to create complex AI
applications. MindStudio leverages foundational AI models and includes built-in
resources to expand the capabilities of your AI.
In this guide, weʼll go over all the essential components that will allow you to get
started in building your first AI.

Creating a New AI

To create a new AI, navigate to “My AIs” and then tap “New AI”. This will open
MindStudio, our Integrated Development Environment (IDE) for creating AIs.
In the bottom left you can find quickstart links to tutorials, pre-built templates, the
Prompt Generator Engine, and more.

The Prompt

The Prompt is the set of initial system instructions that you provide to your AI. The
Prompt allows you to define your AI's role and how it is meant to assist the user.
Simply type in your instructions or use AI to generate new prompts using our open
source Prompt Generator Engine.

Automations

Automations are no-code workflows that instruct the AI to perform various
automated actions. All automation workflows consist of 3 parts:
1. Onboarding State – Initializes the automated workflow.
2. Automation Blocks – Each block performs a specific action such as collecting user inputs, querying data, performing custom JavaScript functions, and more.
3. End State – Ends the automation and begins the user-to-AI interaction.

Data Sources

Data Sources allow you to integrate custom data to enhance the capabilities of
your AI. When you add a new Data Source, you allow the AI to reference that data
before generating a response to your users.
Integrating a custom data source allows the AI to leverage additional information
and domain-specific knowledge beyond its default training data. This provides a
simple way to customize the AI's knowledge for your specific needs.

Variables

Variables are used to represent parts of data that we pass through the AI.
Variables can be recognized by the AI in both the prompt and automation sections
of the editor.
Wrapping your variable name in double curly braces will allow the AI to recognize
that variable. Example: {{myVar}}

Model Settings

Different LLMs have different strengths and performance characteristics. Depending on the use case
of your app, some models might perform better than others.
MindStudio allows you to choose the best model for your particular needs.

Publishing

You must configure your AIʼs settings before publishing.



General - Provide your app details and build a landing page.

Access - Configure your share settings and set a price.

Billing - View invoices and configure payment settings.

To configure the publish settings, click on the Root file on the left panel.

Testing and Debugging

When writing your Prompt, you may test the response output by using the Prompt
Tester on the right. You can also quickly troubleshoot your app by using the Errors
and Debugger tabs on the left panel.

To preview the AI without publishing, you can open the AI in draft mode by tapping
on the Open button at the top right and selecting Preview Draft.

AI Dashboard

Every app comes with an individual dashboard that allows you to track your total
earnings, user counts and more.

How To Write Prompts in MindStudio


Everything you need to write your main prompt
Prompt Writing Overview

The Prompt is the set of initial system instructions that you provide to your AI. The
Prompt allows you to define your AI's role and how it is meant to assist the user.

You can locate the prompt section by navigating to the Main.flow file and then
tapping on the Prompt tab.

Prompts are written using natural language. There is no code knowledge required
to write out your prompt.

NOTE: To improve readability for collaborators or for later editing, you may
optionally write your prompt using Markdown. Check out the Markdown
Cheatsheet for more information on using Markdown.

Using the Quickstart Menu

When you first create a new AI, you'll see the Quickstart menu panel in the bottom
left of the prompt environment. These tools are meant to assist you in writing your
prompt if you're not sure where to start.

Default Starter Prompt

The Default Starter Prompt will import a basic prompt to help get started with
writing your own.

Browse Templates

The Browse Templates button will open up a modal of pre-built templates to help
with specific use cases. You can import the template and then edit the prompt to fit
your needs.

Generate Prompt

This will open the Prompt Generator Engine where you can use AI to generate the
prompt for you. Simply type in your app's function and let the AI do the rest.
NOTE: You can also link to video tutorials or browse our Discord community to
learn more about building AIs.

Prompt Writing Basics

The content of your prompts will vary for each AI depending on your AI's specific
use case and requirements.
When writing your prompt, it's good practice to start by defining your AI's role and
then giving it a task to complete. To distinguish between the AI and the user, you
may want to refer to your AI as “Assistant” and the user as “Human”. Also, make
sure to use a third-person point of view when writing out your prompt instructions.

Example: The Assistant is a Youtube Metadata Generator that assists Human with
generating Youtube titles and descriptions for their videos.
Properly formatting your prompt can drastically improve your AIʼs responses. You
can break up your Prompt into sections and allow each section to describe a
different element of how the AI should act.

For example, you can use a number sign to create a header and then create a
section called Assistant Output. You can then create line items of all the elements
youʼd like the AI to output at the end of the workflow.
You can follow the same steps for other sections of the prompt.
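For instance, building on the YouTube example above, such a section might look like this illustrative sketch:

# Assistant Output
- A title for the video
- A short description of the video
- A list of relevant tags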

NOTE: Make sure not to overload the AI by adding the same variable multiple
times throughout the prompt. This will provide the AI with superfluous context,
and may result in undesirable responses.
You can also inject additional context from variables into your Prompt by using
double braces and the variable name.

Example: The Humanʼs name is {{name}}.

Select the AI Model for Your Use Case


Select which AI model(s), also called Large Language Models (LLMs), to use in
your app.
Overview

MindStudio supports the most popular foundational AI models, called Large Language
Models (LLMs). LLMs do not always generate similar output, even when using an
identical prompt. Each LLM has strengths and drawbacks for particular use cases.
Understanding those strengths, drawbacks, and how they apply to your AI-
powered appʼs use case(s) guides you to which LLM to select for each AI
interaction your app sends.
Since MindStudio is AI model agnostic and can use multiple models within the
same AI-powered app, MindStudio can leverage the strengths of a particular LLM
for a specific use case in your app, while avoiding any LLM drawbacks.
This topic describes best practices when selecting an LLM for your appʼs use
case.
AI Models MindStudio Supports

The sections below outline, in alphabetical order by organization, the AI models,
also referred to as large language models (LLMs), that MindStudio supports. Each
section outlines the following for each LLM:

LLM Name: The LLM Name column displays the name of the LLM.

Training Date: The Training Date column displays the date through which the LLM has
been trained on data from the Internet at large. The more recent the training date,
the more current the data that LLM can process, parse, and evaluate in preparation
for its response.

Notes: The Notes column displays any notes, if applicable.

MindStudio Plan: The MindStudio Plan column displays the lowest plan on which
MindStudio supports that LLM.

Anthropic

The following are the Anthropic LLMs that MindStudio supports.


| LLM Name | Training Date | Notes | MindStudio Plan |
| --- | --- | --- | --- |
| Claude 3 Opus | August 2023 | Best mimics human writing. Largest context window size. | Pro and higher |
| Claude 3 Sonnet | August 2023 | | |
| Claude 2.1 | December 2022 | | Pro and higher |
| Claude Instant | December 2022 | | Free |
Google

The following are the Google LLMs that MindStudio supports.


| LLM Name | Training Date | Notes | MindStudio Plan |
| --- | --- | --- | --- |
| Gemini Pro | December 2023 | Faster than GPT 4 Turbo while matching quality. | Free |
| PaLM 2 | Mid 2021 | | Free |
Meta

The following are the Meta LLMs that MindStudio supports.


| LLM Name | Training Date | Notes | MindStudio Plan |
| --- | --- | --- | --- |
| Llama-2 13B Chat | Early 2023 | | Pro and higher |
| Code Llama | Early 2023 | | Free |
| Llama-2 70B Chat | Early 2023 | Fine-tuned for coding and codebase usage. | Free |
Mistral AI

The following are the Mistral AI LLMs that MindStudio supports.


| LLM Name | Training Date | Notes | MindStudio Plan |
| --- | --- | --- | --- |
| Mixtral 8x7B Instruct | September 2023 | Lowest cost usage. | Free |
| Mistral 7B Instruct | September 2023 | | Free |
OpenAI

The following are the OpenAI LLMs that MindStudio supports.


| LLM Name | Training Date | Notes | MindStudio Plan |
| --- | --- | --- | --- |
| GPT 4 Turbo | April 2023 | Best for reasoning and general usage. Less restrictive. Highest quality of responses. Generally most reliable. Most support for non-English languages. | Pro and higher |
| GPT 4 | April 2023 | | Pro and higher |
| GPT 3.5 Turbo 1106 | September 2021 | | Free |
| GPT 3.5 Instruct | September 2021 | | Free |
General Use Case Strengths and Drawbacks in Specific LLMs

LLM Strengths

Why is this relevant?

MindStudio allows you to use multiple LLMs within one app. Take advantage of the
LLM that best addresses the need for each task.
For example, general usage LLMs can do the following well:

Multimodal: They can process text, speech, and images.

Multipurpose: They can classify, generate, and translate.

Larger data sets: They can leverage the knowledge and data from a large and
diverse set of data.
However, if the task requires writing that mimics human writing well in terms of
fluidity and style, without repetitive phrasing, then only particular LLMs do that
well.
As a best practice, consider these strengths for the following LLMs when planning
your AI-powered app:
| LLM Name | Description of Strength(s) |
| --- | --- |
| Claude 3 Opus | Best mimics human writing, including creative writing and copywriting. Articulates thoughts clearly with the most human-like style. Largest context window size. Fastest response time. |
| OpenAI GPT 4 Turbo | Best for reasoning and general usage. Recommended for the Chain-of-Thought (CoT) prompting technique. Less restrictive regarding acceptable queries and response content. Highest quality of responses. Generally most reliable. Most support for non-English languages. |
| Mistral AI 7B Instruct | Lowest cost usage, yet performs well. Also the least restrictive in response content. Highest quality of response for open-source LLMs. Fastest response time for open-source LLMs. Most non-English language support for open-source LLMs. |
| Google Gemini Pro | Comparable response time to GPT 4 Turbo with comparable response quality. |
| Meta Llama-2 70B Chat | Fine-tuned for coding and codebase usage. |
LLM Drawbacks

Why is this relevant?

Avoid the drawbacks that specific LLMs have. For example, despite the strengths
of the general usage LLMs, they do have drawbacks:

Policy restrictions: Some companies restrict how their LLMs may be used.
This may become relevant for your app.

Biased or inaccurate: Some LLMs may be biased or inaccurate because of
the quality or diversity of the data on which they are trained.

Non-specialized: General usage LLMs are less efficient and less effective
than LLMs specialized to perform particular tasks or domains.

As a best practice, consider these drawbacks for the following LLMs when
planning your AI-powered app:
| LLM Name | Description of Drawback(s) |
| --- | --- |
| Claude 3 Opus | Among the most restrictive LLMs. Avoids controversial topics. |
| Anthropic Claude 2.1 | Among the most restrictive LLMs. Avoids controversial topics. |
| Mistral AI 7B Instruct | Permits NSFW content. |
| Google Gemini Pro | Among the most restrictive in acceptable requests. Diversity and inclusion policies can generate inaccurate responses regarding historical figures. |
Context Window Size

This section outlines the size of the context window for each LLM MindStudio
supports. The context window is the number of tokens that the LLM accepts as
part of an AI prompt. The more tokens an LLM supports, the more information or
data that LLM can process from a prompt. A token is approximately four
alphanumeric characters.
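For example, under that approximation, a 100,000-token context window corresponds to roughly 400,000 characters of prompt data.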

Why is this relevant?

If your AI-powered app must send a large amount of data as part of a prompt,
especially within a Send Message block, select an LLM with the largest context
window size within your MindStudio plan. If your prompt includes a Variable that
stores a large amount of data or references a Data Source from which to process
its data, context window size becomes very important.
The following table lists LLMs in order of largest context window size from top to
bottom.
| LLM Name | Context Window Size (Tokens) | MindStudio Plan |
| --- | --- | --- |
| Claude 3 Opus | 200,000 | Pro and higher |
| Claude 2.1 | 200,000 | Pro and higher |
| OpenAI GPT 4 Turbo | 128,000 | Pro and higher |
| Anthropic Claude Instant 1.2 | 100,000 | Free |
| Google Gemini Pro | 30,720 | Free |
| OpenAI GPT 3.5 Turbo 1106 | 16,385 | Free |
| OpenAI GPT 4 | 8,192 | Pro and higher |
| Google PaLM 2 | 8,000 | Free |
| Meta Code Llama | 4,096 | Free |
| Meta Llama-2 13B Chat | 4,096 | Free |
| Meta Llama-2 70B Chat | 4,096 | Free |
| Mistral AI 7B Instruct | 4,096 | Free |
| Mistral AI Mixtral 8x7B Instruct | 4,096 | Free |
| OpenAI GPT 3.5 Instruct | 4,096 | Free |
Quality of Responses

Why is this relevant?

Some tasks in your AI-powered app may require that the response be of high
quality: that it is accurate and complete. For example, if the LLMʼs task is to
perform a legal document analysis on a legal brief, then the quality of response is
a priority.
The following table lists LLMs in order of quality-of-response ranking from top to
bottom.
| LLM Name | MindStudio Plan |
| --- | --- |
| OpenAI GPT 4 Turbo | Pro and higher |
| OpenAI GPT 4 | Pro and higher |
| Mistral AI 7B Instruct | Free |
| Claude 3 Opus | Pro and higher |
| Anthropic Claude 2.1 | Pro and higher |
| Mistral AI Mixtral 8x7B Instruct | Free |
| OpenAI GPT 3.5 Turbo 1106 | Free |
| Google Gemini Pro | Free |
| Anthropic Claude Instant 1.2 | Free |
| Meta Llama-2 70B Chat | Free |
| Meta Llama-2 13B Chat | Free |
| Google PaLM 2 | Free |
| Meta Code Llama | Free |
| OpenAI GPT 3.5 Instruct | Free |
Response Time

This section outlines response times for the LLMs that MindStudio supports.
Response time is measured as throughput: the rate at which the model can
process and generate text or responses in a given timeframe.

Why is this relevant?

Consider how important response time is based on the following factors:

What is the pain point you are leveraging AI to address in your app? Since you
can leverage multiple LLMs throughout the workflow of your app, take
advantage of the LLM that best addresses the need for each task. If your task
requires deeper reasoning, requires greater accuracy, reviews the vector
database for a Data Source, or is critical for your appʼs success, then as a best
practice consider response time secondary.

Does the fastest response time affect the user experience? If the app is
interacting with the user, such as a co-pilot chatting with the user, then as a
best practice consider response time to be a high priority for that task.

How much data is the LLM processing for a task? If the Send Message
block includes a large amount of data, such as the content stored by a
Variable, or if the LLM references the vector database for a Data Source, then
response time is affected regardless of the LLM performing that task.

The following table lists LLMs in general order of throughput from top to bottom.
Throughput is measured as the number of tokens (tks) per second.
| LLM Name | Throughput (tokens per second) | MindStudio Plan |
| --- | --- | --- |
| Anthropic Claude Instant 1.2 | 72.118 | Free |
| Mistral AI 7B Instruct | 69.238 | Free |
| Mistral AI Mixtral 8x7B Instruct | 69.238 | Free |
| Google Gemini Pro | 67.027 | Free |
| Meta Llama-2 13B Chat | 35.714 | Free |
| OpenAI GPT 3.5 Turbo 1106 | 30.574 | Free |
| OpenAI GPT 4 Turbo | 25.126 | Pro and higher |
| Meta Llama-2 70B Chat | 22.579 | Free |
| OpenAI GPT 4 | 19.563 | Pro and higher |
| Anthropic Claude 2.1 | 14.144 | Pro and higher |
| Claude 3 Opus | - | Pro and higher |
| Google PaLM 2 | - | Free |
| Meta Code Llama | - | Free |
| OpenAI GPT 3.5 Instruct | - | Free |
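As a rough illustration using the figures above, a model generating about 70 tokens per second produces a 500-token response in roughly 7 seconds, while a model generating about 20 tokens per second needs roughly 25 seconds for the same response.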
Response time benchmarking changes rapidly. Furthermore, semiconductor and
inference technology improve constantly, which affects response times. Visit YouAIʼs
Discord server for updates.
MindStudioʼs debugger checks the usage and response times for all LLMs your AI-
powered app uses. See Debugger.

What is a Variable?

How to use the Send Message Block


Reliability and Updates

The following lists the organizations that receive the most funding for their LLMs,
from top to bottom. Select a hyperlink to view the MindStudio-supported LLMs
that organization develops.

OpenAI

Google

Meta

Anthropic

Mistral

Why is this relevant?

The organizations that develop both proprietary and open-source LLMs receive
funding to improve reliability, usability, and new features. Reliability, stability, and
usability updates are priorities for these organizations, especially those that
provide enterprise-class services. Therefore, selecting an LLM for your AI-
powered app that provides high reliability and frequent updates helps ensure your own
appʼs reliability after you publish it.
Interacting with LLMs Using Non-English Languages

Why is this relevant?

After you publish your AI-powered app, users may interact with your app using
non-English languages. If the audience and user base for your app predominantly
does not use English, then leveraging LLMs that provide support to non-English
languages is a priority.
The following table outlines the LLMs that support non-English languages.
| LLM Name | Non-English Languages | MindStudio Plan |
| --- | --- | --- |
| OpenAI GPT 4 Turbo | Spanish, Italian, Indonesian, and other Latin alphabet-based languages | Pro and higher |
| Claude 3 Opus | | Pro and higher |
| Anthropic Claude 2.1 | Spanish, Portuguese, Italian, German, and French | Pro and higher |
| Mistral AI 7B Instruct | Spanish, Portuguese, Italian, German, and French | Free |

How to use Variables in MindStudio


Everything you need to know about getting started with Variables.
Overview

Variables are assigned values that represent various parts of data that are passed
through the AI. Variables are created, stored, and referenced across various parts
of your AI workflow.
Assigning Variables

New variables are created and assigned when:

A new user input is created

An automation block saves its result/output.

Assigning User Inputs as Variables

Whenever you create a user input, you assign a variable name to that input. To do
this, tap on the plus icon next to the User Input folder in the left bar.

You can give your variable a name as part of the input creation process.

Assigning Automation Results as Variables

You can also create variables within Automations. From the Automation canvas,
you can create a Send Message block or a Query Data block and assign it a
variable name.
Assigning Send Message Block Results

First, create a Send Message block by tapping on the plus icon on the automation
canvas. Then, in the configuration panel on the right you can add your message
and then under Message Settings, change the response behavior to "assign to
variable".

Add your variable name and then use that variable to reference the Send Message
in other parts of your workflow.
Assigning Data Source Query Results as Variables

First, create a Query Data block by tapping the plus icon on the automation canvas
and then selecting Query Data. Then, in the configuration panel on the right, select
your data source from the drop-down menu and assign a variable name.

Now that Query Data block can be referenced in other parts of your app using the
variable name enclosed in double braces.
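Example (queryResults is an illustrative variable name): Use the following query results as context for your response: {{queryResults}}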
Referencing Variables

You can reference variables inside of your Prompt and inside of Automation
Blocks.
Variables are referenced in your main prompt or in an automation block by using
double curly braces and the variable name.

Example: {{MyVar}}
Referencing Variables in Your Prompt

You can reference a variable anywhere inside of your prompt by using double curly
braces and the variable name.

NOTE: When using variables inside of your prompt, it's best practice to use the
variable only once. Make sure not to overload the AI by adding the same variable
multiple times throughout the prompt. This will provide the AI with superfluous
context, and may result in undesirable responses.
Referencing Variables in a Send Message Block

You can reference variables in a Send Message Automation by adding the variable
into your message using double braces.
Select the Send Message block, then type your message into the configuration
panel on the right.
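Example (userInput is an illustrative variable name): Summarize the following text for Human: {{userInput}}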

Referencing Variables in a Query Data Block

When you run an explicit data query using the Query Data Source Block, you must
use the Query Template to instruct the AI to specify the query.
For example, if you want to query a user input against a data source, you can add
that variable name in the Query Template.

Other Places You Might Reference Variables

Message Processing

If you use Retrieval Augmented Generation (RAG) inside of your Terminator Block,
you may include variables inside of your Message Processing Template.

Custom Functions

Some custom function blocks may be able to both call a variable and assign a
variable within the same block.
When you call a variable, you must make sure to wrap the variable name in double
curly braces.
When you assign the output variable, no double curly braces are needed.
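For illustration, a minimal sketch of what this could look like in a custom JavaScript function is shown below. The exact Run Function API may differ, and the variable names (topic, cleanedTopic) and the output-assignment mechanism are assumptions for this example only.

// Hypothetical sketch: confirm the actual Custom Functions API in its documentation.
// Calling a variable: wrap its name in double curly braces so the stored value
// is injected before the function runs.
const topic = `{{topic}}`;

// Lightweight processing of the injected value.
const cleanedTopic = topic.trim().toLowerCase();

// Assigning the output: refer to the output variable by its plain name, with no
// curly braces (for example, a block setting such as "Assign result to: cleanedTopic").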

Best Practices When Using Variables

Understanding what variables actually represent.

AIs read variables as raw data. This means that the placement of your variables
becomes an important factor in your AIʼs ability to accurately understand the
action it is supposed to take.
Bad Example:
Tell me about {{myBigDataSet}} and tell me something interesting.
Good Example:
Tell me something interesting that I might not know about the data set below.
Use the following data set as context for your response:
```
{{myBigDataSet}}
```
Donʼt repeat variables inside of the same area

Do not overload the prompt by referencing the same variable over and over again.
LLMs can become easily confused if data is repeated throughout their instructions.
Use variables in a clear and concise manner.
Bad Example:
When you respond, take into account the name of the Human found in
{{aboutUser}}
I also want to know about Humanʼs food preferences from {{aboutUser}}
In {{aboutUser}} the Human refers to his age, take this into account talk to them
like the age provided.
Good Example:
## Response Formatting
When you respond, take into account Humanʼs Name, food preferences, and age.
Tailor your responses to Human.
## About Human:
{{aboutUser}}

How to Use Data Sources in MindStudio


Everything you need to know about using Data Sources
Data Sources

Data Sources allow you to integrate custom data to enhance the capabilities of
your AI. When you add a new Data Source, the file is converted to a vector
database the AI can query before generating a response to your users.

Integrating a custom data source allows the AI to leverage additional information
and domain-specific knowledge beyond its default training data. This provides a
simple way to customize the AI's knowledge for your specific needs.
Create A Data Source
To create a Data Source, tap on the (+) icon next to the Data Source folder in the
left panel. This will open up a new tab where you can add a name and short
description for your Data Source.

Then tap upload files to add the files youʼd like to include in your Data Source. You
are able to include multiple files in a single Data Source.

Once youʼve finished uploading your Data Source, you can tap the arrow to
preview your uploaded file and view the extracted text, raw data, and raw vectors
that were created.

You can access and make edits to your data source by going to the Data Sources
folder in the left panel.

Referencing A Data Source

Create Query Data Automation

You can reference your Data Source by using a Query Data automation block.
Create a new block by tapping the plus icon on the automation canvas and select
“Query Data”.

On the right-hand side, select your desired Data Source from the drop-down
menu, give it a variable name, and set the max results youʼd like the AI to query.

In the Query Template, provide instructions on how the AI should query the Data
Source. This query can include variables from other inputs or automations.
Example: I am looking for information about {{topic}}.

NOTE: Uploading a Data Source DOES NOT mean the AI now knows everything
about your file. You must instruct the AI on how to query that file for the AI to use
your Data Source properly.
Send Message Automation

Now that you have saved your query data block as a variable, you can reference
that variable in a Send Message block.
Tap the (+) icon on the automation canvas and select Send Message. In the space
provided type your message.
How to Use Automations in MindStudio
Everything you need to know about building out your AI automation workflow
Automations Overview

Automations are no-code workflows that instruct the AI to perform various
automated actions. All automation workflows consist of 3 parts:
1. Start Block – Initializes the automation workflow.
2. Automation Blocks – Each block performs a specific action within the workflow.
3. Terminator Block – Ends the automation workflow and begins the user-to-AI interaction.

You can locate the Automation canvas in the Main.flow file and then tap
Automations.

Onboarding

Onboarding is the initial set of inputs a user will interact with before entering the
main AI workflow.
You can create these inputs in your global settings by selecting the root file then
global settings.

Add your input(s) by tapping the (+) icon in the bottom left of the container. You
can add inputs you've already created or create new ones.

NOTE: You can also create and delete user inputs directly from the User Inputs
folder on the left resources panel.
Automation Blocks

Automation blocks act as triggers for a specific action the AI performs at a certain
section of the AI workflow. You can add an automation block by tapping the (+)
button directly on the automation canvas and selecting the block you want to use.

Current Automation Blocks

1. Collect input: Collects input directly from a user, which can be stored as a variable.
2. Query Data: Instructs the AI to reference a specific part of an uploaded data source. The Data Source can also be stored as a variable and referenced in other parts of the app.
3. Run Function: This block runs more complex functions powered by custom JavaScript. Check out Custom Functions to learn more about this block.
4. Send Message: This block allows the AI to send a synthetic user message out of view from the user. This message can reference variables and ask the AI to output a specific response.
5. Menu: This block creates a multiple choice user input that allows you to branch responses to different blocks on your canvas.
6. Jump to Workflow: This block allows you to jump from one workflow to another.

Terminator Blocks

Terminator blocks are end-state actions at the end of the automation workflow
and determine what the user-to-AI interaction looks like.

Current Terminator Blocks

1. Chat: This takes the user into a basic chat functionality between themselves and the AI.
2. Revise Document: This makes the AIʼs initial output editable for the user using a Rich Text Editor. The user can also make AI-powered edits with a magic wand tool.
3. Data Source Explorer: This shows the selected Data Source to the user upon completion of the workflow. A user can simultaneously chat with the AI while exploring the document.
4. End Session: This ends the session after every initial output. A user can start a new session by opening up a new thread.

How to Configure Publishing in MindStudio


Everything you need to know about publishing your AI with MindStudio
The Root Settings

Once youʼve completed the build of your AI and you're ready to ship it, you can
configure the built-in publish settings, which are broken up into three parts:

General - Provide your app details and build a landing page.

Access - Configure your share settings and set a price.

Billing - View invoices and configure payment settings.

You can access the publish settings by tapping on the root file in the left panel.

General Settings

Details

Provide your AI with an app name and a short description.

Icons and Media

Upload an app icon and a social sharing card that will be displayed anytime you
share the link to your app.

NOTE: This section also gives you the option of adding an image gallery and a
preview video that will both be displayed on your appʼs landing page.
Landing Page

Create a detailed description and a footer where you can add up to 12 links.

NOTE: The landing page uses a Rich Text Editor, where you can use custom
formatting, link text, add images, and more.
Custom Branding

With custom branding you can choose the amount of white labeling, create
custom color schemes, choose font styles, and more.

NOTE: Branding and customization varies depending on your subscription plan.


Access Settings

Sharing

Configure the level of visibility provided to users.



Visibility: Choose whether to make your app public on the “explore” section of
YouAi or only visible with the app URL.

Remixing: Allow others to make a clone of your AI and build their own.

Password Protect: Restrict your app to users that have a custom password.
Those without the password can still access the landing page.

Embedding

Enable API access to embed your app on third party sites and more.
Check out our video on Embedding apps to learn more.
Pricing

Keep your app free or set a monthly subscription price for users.

Billing Settings

Subscription

Select or change your YouAi monthly subscription plan. All plans are per AI app.
Click here to view plan options.
Invoices

Access invoices from your paying customers.


Payment Settings

Set up and edit your payment settings via Stripe.

Testing and Debugging in MindStudio


Learn everything you need to know about testing and debugging your AI in
MindStudio.
Testing and Debugging

In this guide, we'll be going over the basics of testing and debugging your app in
MindStudio.
Prompt Tester
The Prompt Tester is a chat terminal that allows you to interact with your AI and
test different types of prompts and interactions to make sure your AIʼs thought
process is working properly.

You can access the Prompt Tester in the Main.flow section of MindStudio under
both the Prompt and Model Settings tabs.

NOTE: You cannot use the Prompt Tester for automation flows.
Errors Tab

The Errors tab shows a list of errors that have occurred within the creation of the
app. Each error includes a short description of the issue and the section in which
the error is occurring.

Debugger

The Debugger gives you an in-depth look into interactions with your app, where
you can find and debug any issues.

The Debugger breaks up the interactions into 4 main sections:

1. Message: This section shows the sent and received messages between you and the AI.
2. Variables: This section shows all the variables used and the raw data associated with each variable.
3. Preamble: The Preamble shows how the AI has interpreted your Prompt.
4. Prompt: The Prompt allows you to make edits to, or re-write, your Prompt.

Draft State

The Draft state allows you to experience the actual flow of your app. This includes
onboarding, automations and the end state interactions.
The Draft also has its own built in debugger that will show you variables and raw
data as you interact with the app.

How to Create Multiple Workflows


Learn how to create multiple workflows within a single app
Overview

Creating multiple workflows within a single app allows developers to use
different AI models, prompts, and automation flows to improve the user experience.
In this guide, we'll discuss how to create additional workflows and implement them
into your app.
Creating An Additional Workflow

To create a new workflow, navigate to the root file and tap the (+) icon. Then,
select new workflow from the dropdown menu.

Once you've added your new workflow, configure the prompt for your app, the
automation workflow, and the model settings. You can rename or delete your
workflows at any time by right-clicking and selecting the desired action.

Utilizing Multiple Workflows

Create Jump to Workflow Block

The Jump to Workflow automation block triggers the switch between two
workflows. To use the jump block, tap the (+) icon or right-click in the area of the
automation canvas where you would like to switch workflows. Then select Jump to
Workflow from the dropdown menu.

Configure

In the configuration panel on the right, select the workflow you want to jump to.

You can create more jump blocks to move back and forth between other
workflows. Where to place your jump block will depend on your use case.

NOTE: Onboarding user inputs exist globally and remain at the beginning
of your app experience no matter the number of workflows you have.

Using the MindStudio API


Learn how to activate API functionality for your app.
Overview

The MindStudio API enables the programmatic invocation of workflows. This
feature opens up several use cases, particularly in conjunction with logic blocks
and launch variables. It allows for the integration of AI workflows as a step in
larger automation processes.
How to Access and Enable
First, tap on the root folder of your app and navigate to the API
Access menu.

Next, tap the Enable API Access button. This action will activate the
API functionality for the app. Upon enabling, you will receive the
App ID and an API key that can be used to make requests.

Making Requests

The primary requests that can be made using the API key are running a workflow
and loading an existing thread.
Running a workflow

To run a workflow, use the following code snippet. This request allows you to
invoke a specific workflow in your application, optionally passing variables that
the workflow can use.
import fetch from 'node-fetch';

const APP_ID = "your-app-id";
const API_KEY = "your-api-key";

const response = await fetch("https://api.youai.ai/developer/v1/apps/run", {
  method: 'POST',
  body: JSON.stringify({
    appId: APP_ID,
    // Optional. Invoke the app with specific variables that can be accessed
    // as, e.g., {{$launchVariables->demoVariable}}
    variables: {
      demoVariable: 'demoValue',
    },
    // Optional. If not included, default entry workflow will be invoked.
    workflow: 'Main.flow'
  }),
  headers: {
    "Content-Type": "application/json",
    authorization: `Bearer ${API_KEY}`,
  }
});
const data = await response.json();

console.log(data.threadId);
Note: Make sure to replace your-app-id and your-api-key with the
APP ID and API Key you obtained earlier.
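The threadId logged above identifies the thread created by this run; you can pass it to the load-thread request shown below to fetch that thread's details later.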
Loading a thread

The code snippet below is a request that fetches the details of a thread you've
previously created.
import fetch from 'node-fetch';

const APP_ID = "your-app-id";
const API_KEY = "your-api-key";
const THREAD_ID = "replace with the ID of a thread you have created";

const response = await fetch("https://api.youai.ai/developer/v1/apps/load-thread", {
  method: 'POST',
  body: JSON.stringify({
    appId: APP_ID,
    threadId: THREAD_ID,
  }),
  headers: {
    "Content-Type": "application/json",
    authorization: `Bearer ${API_KEY}`,
  }
});
const data = await response.json();

console.log(data.thread);

NOTE: Make sure to replace your-app-id and your-api-key with the APP ID and
API Key you obtained earlier.
Launch Variables

Launch variables can be used to pre-fill variables and invoke workflows with
specific data. These variables can also be utilized to connect to automation tools
like Zapier, providing a wide range of use cases.
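For example, the run request shown earlier passes variables: { demoVariable: 'demoValue' } in its body; that value becomes available inside the app as {{$launchVariables->demoVariable}} (for instance, in a Prompt line or a Send Message block). The name demoVariable is just the placeholder from that snippet; substitute your own variable names.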

Create a New AI App


Create a new AI-powered app in MindStudio.
Create a New AI App
MindStudio makes it easy to create an AI-powered app in minutes without any
coding. Follow these steps to create a new AI app:
1. Ensure you have created a MindStudio account.
2. Select My AIs from the left navigation if it is not already selected.
3. Click Create New AI.
4. Do one of the following:

Create an AI app from a template: MindStudio has several templates to
quickly solve real pain points in your business. Select a template and then
click Next.

Use AI to generate a Prompt for your app: Let AI generate the Prompt
for your app! Select Generate Prompt and then click Next. See Use AI to
Generate the Prompt for Your App.

Start from a blank AI: See how easy it is to create an AI-powered app
from scratch! Select Blank AI, and then click Next.

What is a Prompt?
Understand how a Prompt instructs AI how to perform a task.
Overview

An AI Prompt is a concise set of instructions designed to guide and instruct the AI
model, also referred to as a large language model (LLM), regarding what task(s) to
perform and/or what information to provide. The Prompt serves as clear,
structured directives to ensure the LLM understands and fulfills your goals
effectively.
Attributes of an Effective Prompt
Following are attributes of an effective Prompt:

Specificity: A precise Prompt leads to more accurate and relevant responses.

Contextual clarity: Clearly defining context in your Prompt can help the LLM
generate more consistent and relevant responses.

Structured format: Structuring your Prompt with clear instructions can help
the LLM understand and fulfill your requests accurately.

Explicit parameters, traits, and/or constraints: Mention any parameters,
traits, and/or constraints in your Prompt, such as length, style preferences,
exclusions, or restrictions on what the LLM generates in its response. These
explicit statements guide the LLM effectively.

Use AI to Generate the Prompt for Your App


Learn how AI can generate the Prompt for your app.

Overview

The easiest way to write Prompts is to use the Prompt Generator Engine built into
MindStudio. The Prompt Generator Engine uses AI to help you create custom
Prompts for your AI app in a matter of minutes.
Use AI to Generate the Prompt for Your App

Follow these steps to use the Prompt Generator Engine to create the Prompt for
your app:
1. Ensure the following:
   1. You have created a MindStudio account.
   2. You have created a new AI app from scratch. If you created your new AI app from a blank AI, then remove any Prompt that displays, and then click Generate Prompt.
2. In What do you want your AI to do?, describe how you want your AI-powered app to work. Follow these guidelines:
   - Write in everyday natural language how you want your app to function.
   - Write in concise, clear statements.
   - Describe the actors in your app. AI is one actor. The person using your app is another actor.
   - Optionally, write using lists:
     - Unordered list: Use an unordered list to describe separate functions your app is to do.
     - Ordered list: Use an ordered list to describe functionality that the app is to do in a specific order.
3. Optionally, from Generation Engine, view which engine will generate your Prompt. By default, MindStudio's engine, named Default Engine (youai-default-001), generates your Prompt.
4. After writing your app's behavior and functionality, click Generate. This button is not enabled until text is entered into the Prompt Generator Engine.
5. Review the generated prompt and make sure all bullet points make sense within the context of your specific use case.

Example

Below is an example to create an app that is written in two different formats: one is
written as a brief paragraph while the other is written using a list.

Text written in paragraph format to create an app that generates blog posts.

Text written using an ordered list to create an app that generates blog posts.

The Prompt Generator Engine generates the following Prompt from this example
that is ready to use. See how easy that is?

Prompt that is ready to use to build a MindStudio AI-powered app.


Contribute to the Prompt Generator Engine

The Prompt Generator Engine is an open-source project.


If you would like to contribute to this project by creating your own prompt
generation engine, click the GitHub link.

Write Your Own Prompt


Write your own Prompt that contains explicit instructions for how your AI-powered
app is to work.
Overview

Instead of using AI to generate the prompt for your app using the Prompt
Generator Engine, you can write your own Prompt. When creating an app from a
blank AI, a sample Prompt displays for you.
Follow these steps to write or edit your Prompt:
1. Log in to MindStudio, and then edit your AI app. The Automations tab displays by default.
2. Select the Prompt tab.

The Prompt editor displays the current Prompt. Start editing!

Want AI to generate your Prompt for you? Remove the current Prompt, and then
click Generate Prompt.

See Use AI to Generate the Prompt for Your App.


Guidelines to Writing Your Own Prompt

Writing your own Prompt is easy to do. Follow these guidelines:

Write using Markdown in your Prompt. See this helpful Markdown Cheatsheet.

Follow Prompt writing best practices.

Use Markdown in Your Prompts


Learn how to use Markdown to create more efficient Prompts that better instruct
the AI.
Overview

Markdown is a lightweight markup language used to add formatting elements to your
Prompt. Markdown is an easy-to-use syntax that represents text formatting and
structure. Include Markdown in your Prompts for the following reasons:

Markdown structures your prompt into modular sections that better instruct
the AI model, also referred to as the large language model (LLM). For example,
structure your Prompt into separate sections:

Context and background that informs the AI without the user providing that
context;

The input the LLM uses to perform the task, such as Variables;

The role the LLM plays when performing the task, such as a blog generator or
training co-pilot;

The task the LLM performs;

Constraints to place upon the AI model that specify what the LLM is not to do.

Structure your Prompts to make them more efficient and organized for the
LLM to follow, with consistent results. Prompts that contain unorganized
instructions can overwhelm and confuse the LLM and produce undesired results.

Structure Prompts to make them more legible for collaborators on your project
or those that remix your project for their own AI-powered apps.

Prompt Writing Best Practices

What is a Variable?
This topic describes how to use Markdown in your Prompts for experiences your
app users will appreciate.
Sectional Formatting to Organize Instructions in Your Prompt

See these sections regarding organizing Prompt instructions into sections:



Headers

Lists

Bolded text

Other Markdown syntax

Headers

Create sections within your Prompt to break up information and make your
instructions more clear and concise. To create a section, use a hashtag symbol (#)
to create a header. Anything you type underneath that header will become part of
that section. See Overview for how to structure your Prompt into sections.

Optionally, create sub-headers using two or three hashtag symbols to organize
your information even further.

Examples:

Markdown Headings
| Heading Level | Syntax |
| ----------- | ----------- |
| Heading 1 | # H1 |
| Heading 2 | ## H2 |
| Heading 3 | ### H3 |
Lists

Create a list to organize important information. Use the following list types
depending on your use case:

Unordered list

Ordered list

Sub-list

Unordered List

An unordered list presents information for the LLM to recognize in no particular
order. To create an unordered list, use dashes (-), asterisks (*), or plus signs (+) in
front of each line item.

Example:
Unordered List
- First item
- Second item
- Third item
Ordered List

An ordered list presents information for the LLM to recognize in a specific order to
process. Unlike the unordered list, the ordered list uses a numerical system to list
out line items.
Example:
Ordered List
1. First item
2. Second item
3. Third item
Sub-List

Depending on your use case, optionally create sublists to further organize
information. To create sub-lists, indent the sublist items with two spaces.

Example:
Sub-List
1. First item
  - Sub item 1
  - Sub item 2
2. Second item
3. Third item
Bolded Text

In Markdown, you can use bolded text to emphasize the importance of a section of
your Prompt. Bolded text also makes your Prompt more legible for collaborators on
your project.
Create bolded text by using double asterisks (**) around the text to bold.
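Example:
The Assistant **must always** use Markdown formatting.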

Other Markdown Syntax

Less frequently used Markdown syntax can help provide examples to the LLM of
different types of components or how to generate its output. Providing examples
in your Prompt can help ensure that the initial output of your app is consistent with
your instructions.
For example, if you need the initial output of your AI app to be formatted inside a
table, adding an example of a table provides the LLM with the context it needs to
perform that function.

Following are examples of additional Markdown syntax that you can include in your
Prompt:
Table
| Syntax | Description |
| ----------- | ----------- |
| Header | Title |
| Paragraph | Text |
Task List
- [x] Write the press release
- [ ] Update the website
- [ ] Contact the media
Link
[Text](https://www.example.com)
Image
![alt text](image.jpg)
For more information on Markdown, see the Markdown Cheatsheet.

Prompt Writing Best Practices


Follow best practices to write perfect Prompts that generate desired and
consistent results.
Overview

Writing a well-designed Prompt is essential to create AI-powered apps that
function properly and consistently. This topic describes best practices to write
perfect Prompts.

Let AI generate the Prompt for you!

What is a Prompt?

Use AI to Generate the Prompt for Your App


Best Practices to Structure Your Prompt

Follow these guidelines to structure your Prompt (a skeleton example follows this list):

Organize your Prompt into sections using Markdown. See this helpful
Markdown Cheatsheet.

Structure your Prompt, as needed, into the following Heading 1
sections. Within each section, write each descriptor or instruction in
an unordered list.

Purpose: The Purpose section describes the problem the AI-powered
app is to solve. Why are you creating this app? Explicitly describe the
app's purpose and the actors involved.
Best Practice: Your app involves at least two actors: the AI model,
also referred to as the large language model (LLM), and the app's
user. In the Prompt, refer to the LLM as Assistant. Refer to the user as
Human.
Best Practice: Explicitly state in the first listed item in the Purpose
section that Assistant is the app and what that app does. For example,
Assistant is an app that generates blog posts for users.

Context: The Context section describes any additional information,
context, or background to inform the AI how to perform the tasks.
Best Practice: Placing contextual information into the Prompt ensures
that AI processes it completely, versus placing contextual information
into a Data Source. If the contextual information is important and is
less than 50,000 to 60,000 tokens (between 200,000 and 240,000
characters, including spaces), then include it into the Prompt.
Best Practice: If the contextual information is not important and is
greater than 60,000 tokens, then add it as a supported file into a Data
Source. If the contextual information is greater than 60,000 tokens,
then the AI model's context window, or how much data the AI can
process at once, is surpassed.

Examples: The Examples section provides explicit examples that
serve as more context for what you expect the LLM to generate in its
response. Examples are required when using the Chain-of-Thought
(CoT) prompting technique.

Tasks: The Tasks section explicitly instructs the tasks that the app
performs.
Best Practice: Describe upon which actor the LLM performs the task.
For example, does the LLM submit a message to Human (user) or to
another explicitly stated system? If the latter, ensure that the stated
system is explicitly described as an actor in the Purpose section.

Parameters: The Parameters section specifies the conditions under
which the LLM operates to fulfill the Prompt's instructions. This
section plays a crucial role in guiding the LLM's response by defining
the boundaries and specifics of the tasks.

Output: The Output section specifies the expected results from the
LLM's processing of the Prompt. This section outlines the specific
characteristics, format, and content that the app user expects in the
AI's response.
Best Practice: Explicitly specify information or content that must be
included in the LLM's response. This section guides the AI regarding
what topics, points, or data should be in the output to ensure it is
relevant and comprehensive. This is particularly important for Prompts
aimed at generating informative or educational content.
Best Practice: Specify the presentation format of the output if
necessary. For example, should the output be a list, table, or a specific
document structure?
Best Practice: Specify the file type to output if necessary. For
example, should the output be in plain text, CSV, PDF, HTML, or other
format?

Traits: The Traits section specifies the desired characteristics, tone,
style, and other qualitative aspects of the AI's output. This section
helps ensure that the LLM's response aligns with the user's
expectations in terms of presentation, voice, and overall feel.

Constraints: The Constraints section restricts the LLM's behavior and
output by setting specific rules or conditions. This section keeps the
AI's responses focused and narrows the output to be more
predictable, relevant, and aligned with the app user's needs.
Constraints streamline the LLM's processing by limiting the scope of
its responses, thereby improving efficiency and preventing information
overload.
Best Practice: Specify topics or themes that should be included or
avoided in the LLM's response. This helps in tailoring the content to
the user's preferences and avoiding irrelevant or sensitive topics.
Best Practice: In use cases where the AI is assigned a specific role
(such as a tutor, advisor, coach, or co-pilot), constraints can define
how it should behave within that role to maintain consistency and
appropriateness.
Best Practice: Specify the sources the LLM can or cannot use for
generating responses. This is crucial to ensure the credibility and
accuracy of the information provided.
Best Practice: Limit the length of the LLM's responses to ensure
brevity and conciseness. This is particularly useful for tasks where
information needs to be succinct.
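Putting these sections together, a Prompt skeleton might look like the following illustrative sketch. Adapt, reorder, or omit sections to fit your use case; the variable {{topic}} is only an example.

# Purpose
- The Assistant is an app that generates blog posts for Human.

# Context
- The Assistant understands the topic Human wants covered as: {{topic}}

# Tasks
- Write a blog post about the provided topic.

# Output
- The Assistant must provide Human with a complete blog post formatted in Markdown.

# Constraints
- The Assistant must keep the post concise and avoid irrelevant topics.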

Use Markdown in Your Prompts

Chain-of-Thought Prompting Technique


Prompt Writing Style Best Practices

Follow these best practices to write an effective Prompt:




Use clear and concise language in your prompts to avoid ambiguity and
ensure the AI comprehends your instructions accurately.

Keep your Prompts simple and straightforward. State one idea at a time.

Avoid repetitive lines, grammatically incorrect sentences, or repeating the same
Variable. This writing style will overwhelm the LLM (AI model) and cause
issues.

Provide detailed instructions that generate consistent results.

Include context in your Prompts as additional information that guides the LLM
accurately.

Include what the LLM is to output, including what it is to generate and in which
format(s).

Include any traits, parameters, and/or constraints that limit how the LLM
performs its task or what it generates in its output.

Do Not Overcomplicate Your Prompt

Keep your Prompts simple and straightforward. Avoid unnecessary complexity that
may confuse the LLM and lead to inaccurate responses.
A common misconception about writing Prompts is that more information means
better results. There are some use cases that may require more complex Prompts
than others, but complexity does not always mean more text.
Adding repetitive lines, run-on sentences, or the same Variable repetitively
overwhelms the LLM and causes issues. To prevent these problems, state one idea
at a time and use Markdown syntax to keep organized.
Compare the examples below of a bad Prompt versus a good Prompt.

Improper Practice

The following partially-shown Prompt is badly worded and is not organized.


Bad Practice
##Incorporate relevant information from any available sources
Consider the topic of the discussion response, and then incorporate exactly how
relevant information from any available sources, including {{document_1}} and
{{document_2}} could best relate to the topic of the discussion response, and cite
them using APA in-text citations if provided (e. g., {{APA_Citation_1}},
{{APA_Citation_2}}).
If documents and citations are provided:
If both {{document_1}} and {{document_2}} are uploaded, and corresponding
citations like {{APA_Citation_1}} and {{APA_Citation_2}} are entered, please
incorporate information from these sources into your response. Include proper
APA in-text citations for each citation. If only documents are provided:
If only {{document_1}} and/or {{document_2}} are uploaded without specific
citations, use the information from these documents in your response. If citations
are mentioned within the documents, consider using those. Ensure that the
content you generate aligns with the documents' context. If only citations are
provided:
If citations like {{APA_Citation_1}} and/or {{APA_Citation_2}} are entered without
corresponding documents, please provide information based on the cited sources.
Use your general knowledge to elaborate on these sources as necessary.
If neither documents nor citations are provided:
In the absence of both documents and citations, you may generate content based
on general knowledge and context. Clearly state that the information is generated
and not directly sourced from specific documents or citations.
In any case, maintain proper APA citation style:
Ensure that in-text citations are formatted correctly in APA style and include
relevant details such as author(s) and publication year.

Proper Practice

The following partially-shown Prompt is well written and is organized. Note how
the LLM is referred to as Assistant and the app user referred to as Human.
Best Practice
# The Assistant
The Assistant is a personalized gift recommender that assists Human in finding
the right gift for an individual.
The Assistant understands the individual as: {{giftee}}

## Assistant Output
The Assistant **must always** provide Human with:
1. 5-8 gift recommendations
2. a link to purchase each gift
3. a description of your findings

## Assistant Formatting Rules
The Assistant **must always** use Markdown formatting.
Below is an example of a table:
| Syntax | Description |
| Header | Title |
| Paragraph | Text |
Add Variables Correctly Into the Prompt

Variables are a powerful way to inject information into your Prompt that the user
entered into your app. The Variable represents that entered information. However,
how you add those Variables into your Prompt matters.
When an app user provides information via a User Input block, that data is stored
into a Variable. Using this Variable is a powerful and flexible way to instruct the
LLM (AI model) how to perform a task. However, how you add a Variable into your
Prompt is important. When not correctly entered following best practices, the
injected information the app user entered can make the Prompt wording become
grammatically incorrect or unclear, thereby confusing the LLM. Understanding
how to properly write Variables into your Prompt can mean the difference between
a functional and nonfunctional app.
Compare the examples below of how not to use a Variable in a Prompt versus how to
do so. In these examples, the Variable is named style. Surrounding the Variable
with two pairs of curly braces injects the data the Variable stores into the Prompt
and replaces the text {{style}}. Note how in the improper example, after the
Variable value is injected into the Prompt, the instruction becomes grammatically
incorrect, thereby making it unlikely the AI understands the instruction.

Improper Practice to Add a Variable into a Prompt

After the Variable value is injected into the Prompt, the instruction becomes
grammatically incorrect, thereby making it unlikely the LLM understands the
instruction.
Bad Practice
The Assistant is a Copy Writer that assists Human in generating SEO-focused Blog
Posts in the writing {{style}} of Human.

Proper Practice to Add a Variable into a Prompt

Write the Variable after a colon (:) so that after the Variable value is injected into
the Prompt, the instruction remains grammatically correct and clear.
Best Practice
The Assistant is a Copy Writer that assists Human in generating SEO-focused Blog
Posts in the writing style of Human.

The Assistant understands Humanʼs writing style as: {{style}}.

What is a Variable?

How to use the User Input Block


Address the AI in the Prompt in the Third-Person Perspective

When writing your Prompt, address the LLM (AI model) in the third person, not the
second person. Addressing the AI in the third person tells it what role to assume
when performing the instructions that follow. Telling the AI which role to assume
sets the context for which data the LLM has at its disposal.

Improper Practice to Address the AI in the Prompt

This improperly written Prompt addresses the LLM in the second person
perspective.
Bad Practice
You are a YouTube Metadata Generator that generates metadata for YouTube
videos.

Proper Practice to Address the AI in the Prompt

This properly written Prompt addresses the LLM in the third person perspective.
Note how the AI is referred to as Assistant, which is also a best practice.
Best Practice
The Assistant is a YouTube Metadata Generator that assists Human in generating
metadata for their YouTube videos.
Save Certain Actions for Automations

Not all instructions belong in the Prompt. As a best practice, use the Prompt to
define the AIʼs role and designated task(s). Specific actions belong in your
Automation Workflow.

Improper Practice

This improperly written Prompt instructs which action the LLM (AI model) should
take. Instead, use a Send Message block in the Automation Workflow that outputs
the response of that message directly to the app user.
Bad Practice
The Assistant is a LinkedIn Post Generator that assists Human with generating a
LinkedIn Post article about: {{topic}}.

Proper Practice

Keep the action out of the Prompt. Instead, place the instruction in a Send Message
block in the Automation Workflow, as shown below.
Best Practice
Write a LinkedIn post article about: {{topic}}.

Chain-of-Thought Prompting Technique


Learn about the Chain-of-Thought (CoT) technique when writing your Prompt for
your AI app.

Overview

Chain-of-Thought (CoT) prompting is a technique designed to enhance the
reasoning capabilities of AI models, also referred to as large language models
(LLMs), by guiding them to explain their reasoning process step by step. This
method is particularly effective for complex tasks that require a series of logical
steps to reach a conclusion. By analogy, it's like showing your math teacher how
you came to your answer when doing long division.
CoT prompting encourages LLMs to break down their thought process into
intermediate steps, making the reasoning more transparent and easier to follow.
You achieve this by providing exemplars that outline the reasoning process, which
the LLM then emulates when answering similar prompts. This technique can be
helpful when instructing the LLM to perform arithmetic, commonsense, and
symbolic reasoning. The CoT prompting technique not only enhances the accuracy
of responses but also provides insight into how your selected LLM approaches
problem solving. Not all LLMs perform problem solving equally well.
Because your AI-powered app may be used by multiple users simultaneously, use
automated CoT within MindStudio.

Select the AI Model for Your Use Case


Guidelines for Chain-of-Thought Prompting Technique

Follow these guidelines to practice the Chain-of-Thought (CoT) prompting technique:

Require the LLM to think step-by-step: Explicitly instruct the LLM (AI model) to
"think step-by-step" in your Prompt. This tells the LLM to break down its reasoning
into a structured response. Explicitly instruct the LLM to articulate each step of its
thought process, leading logically from one step to the next until it reaches a
conclusion.

Provide a specific problem statement: Clearly state the problem or question that
you expect the LLM to solve. Ensure that your problem statement is well-defined,
concise, and grammatically correct.

Provide examples: Provide several examples where the reasoning process is
explicitly outlined. Place your examples into an Examples section using Markdown
as described in Prompt writing best practices. This helps the model understand the
desired format and approach for the reasoning process.

Test your CoT Prompt and iterate to improve it: Test your CoT Prompt with
different configurations and refine it based on the outcomes. Pay attention to how
well the LLM follows the reasoning steps and adjust your Prompt accordingly. The
steps the LLM outlines should be accurate, relevant to the problem you are solving,
and contribute to solving that problem.

Use Markdown in Your Prompts

Prompt Writing Best Practices


Example of a Prompt Using Automated Chain-of-Thought
Prompting Technique

The following is an example that uses the automated Chain-of-Thought (Auto-CoT)
prompting technique:
# Inventory Management Forecasting App

## Purpose

- The Assistant is an inventory management forecasting app.
- The app forecasts inventory levels for the upcoming quarter that is provided by the Human.
- The app optimizes inventory levels based on sales data, current inventory levels, and anticipated market trends.
- The Human provides the sales data, current inventory levels, and anticipated market trends.

## Example
Refer to the following example of how inventory level forecasting was performed
last quarter.
- Product A had a sales volume of 1,000 units. The current inventory level is 200
units. There is an anticipated market trend indicating a 10% increase in demand.
- Expected demand for the next quarter was calculated and inventory levels were
adjusted as follows:

1. **Calculate expected demand:** Start with the sales volume of the last quarter
for Product A, which is 1,000 units. Anticipating a 10% increase in demand, the
expected demand for the next quarter would be 1,100 units.
2. **Adjust inventory levels:** Considering the current inventory level of 200 units,
to meet the expected demand of 1,100 units, 900 more units were ordered for
Product A for the next quarter.

## Goal
- The app's goal is to minimize overstock and understock situations to maximize
profitability.

## Output
- Explicitly state the step-by-step reasoning in a structured response.
- Output the response into a table with column headers "Previous Inventory",
"Anticipated Growth", and "Inventory Forecasting", in that order.
- Enter your step-by-step reasoning into the "Inventory Forecasting" column.

Variable Best Practices


Learn how to use variables in the most efficient way possible
Overview

Variables are how MindStudio passes raw data through different parts of the app.
In this guide, weʼll be discussing some of the best practices when it comes to
using variables efficiently.
Understand What Your Variables Represent

AIs read variables as raw data. This means that the placement of your variables
becomes an important factor in your AI's ability to accurately understand the
action it is supposed to take.
Below are examples of the right and wrong way to place your variables:
BAD:
Tell me about {{myBigDataSet}} and tell me something interesting.
GOOD:
Tell me something interesting that I might not know about the data set below.
Use the following data set as context for your response:

```
{{myBigDataSet}}

```
Do Not Repeat Your Variables in the Same Section

Do not overload the prompt by referencing the same variable over and over again.
LLMs can easily become confused if data is repeated throughout their instructions.
Use variables in a clear and concise manner.
Below are examples of the right and wrong way to use a variable:
BAD:
When you respond, take into account the name of the Human found in
{{aboutUser}}

I also want to know about Humanʼs food preferences from {{aboutUser}}

In {{aboutUser}} the Human refers to his age, take this into account talk to them
like the age provided.
GOOD:
## Response Formatting
When you respond, take into account Humanʼs Name, food preferences, and age.
Tailor your responses to Human.

## About Human:

{{aboutUser}}
Group Inputs When Needed

You can group inputs together under a single variable name that can be
referenced anywhere in your app. This is useful for referencing a collection of
inputs all at once without clouding areas of your app with too much text.
To do this, give each variable you'd like to group the same name followed by a pair
of square brackets ([]). This will place them in numerical order under a new folder
with the same variable name.

You can reference the group collectively by using {{groupname}} or reference an
individual variable with {{groupname[1]}} if needed.
Double Check Case Sensitivity

When creating variables anywhere in your app, always double-check the spelling
and capitalization of the variable name.
For example, if you have a variable {{MyVar}} and then reference that variable in
your prompt or automations as {{Myvar}}, the AI will not be able to recognize the
data.

NOTE: Always use the debugger to check any inconsistencies between variables.

How to Group Inputs into a Single Variable


Learn how to group inputs under a single variable name
Overview
When you create a user input, you assign that input its own variable name that can
be referenced in other parts of your workflow. But what if you want to reference a
collection of inputs at once without typing them out individually? In this guide,
weʼll show you step by step how to group your inputs into a single variable.
Create An Input Folder

To create an input folder, you first need to create a user input. To do this, navigate
to the resources panel on the left and tap the (+) icon next to the User Inputs
folder.

Next, fill out all the details of your user input. The “Variable” section under
“General” is where you add your variable name. Type in the variable name
followed by a pair of square brackets ([]).

Once finished, you will notice a new folder with your variable name as the title.
Add More Inputs

As you add more inputs, give them the same variable name followed by the square
brackets ([]). MindStudio will automatically store those variables under the same
folder, numbering them in the order in which you made them.

You can now reference those variables individually ({{context[#]}}) by using their
assigned variable name and number, or use the folder name {{context}} as a
variable to reference all of those inputs at once.
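For example, here is a minimal sketch of how a Prompt might reference these grouped inputs, assuming the folder is named context as above (adjust the names to match your own app):

```
## About Human
The Assistant uses the following background information provided by Human:
{{context}}

## Primary Focus
The Assistant pays special attention to the first input:
{{context[1]}}
```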

NOTE: You can change the variable names at any time if you no longer wish to
include a specific user input within that group.

How to use Input Logging


Learn how to keep logs of specific user inputs in your AI workflow.
Overview

Logging allows you to save user data from a specific input in your app's workflow.
This can aid in debugging or tracking events across your app. In this guide, we'll
walk through how to enable logging and where you can view the logged data.

NOTE: Logging is only available to users that are subscribed to our premium
plans.
Steps
Create a user input

Before we enable logging, we must first create a user input. To do this, tap the (+)
icon next to the user inputs folder in the resources panel, and fill out the
necessary information.

Enable Logging

Once you've filled out the details, scroll to the Advanced section of the user input
page, and from the Logging dropdown select Enabled.

In the Test Value box you can type in a test value that makes it easy to pre-fill
responses when testing in drafts.

NOTE: Inputs with logging enabled will publicly display the message "Responses
will be logged and visible to the developer".
Testing

To test your input, navigate to the preview button and select Preview Draft.
Select the "Use test value" button to quickly pre-fill the input.

View Logs

To view your logs, navigate to your app's Dashboard and select the Logs tab.

From here you'll be able to see all logged responses from that input. This also
includes the date of the response, the User ID, and the Thread ID.
Conclusion

With logging, you are able to see your users' responses to specific inputs, allowing
for more detailed insights about the user experience and what you can do to
improve the overall functionality of your app.

How to use Auto Arrange


Learn how to organize your automation canvas with auto arrange
Auto Arrange

Auto arrange takes all of your automation blocks and arranges them in a vertical
line. To use auto arrange, tap the auto arrange icon in the bottom left of the
canvas.

NOTE: You can tap the spacebar on your keyboard to center your blocks.
Changing the Auto Arrange shape

The auto arrange feature also allows you to change the shape of the flow. To
change the shape, click and drag to highlight your blocks and then select the
desired shape in the bottom left of the canvas.

Changing a section of blocks

You also have the ability to arrange a section of blocks as opposed to your entire
canvas. To do this, highlight the section of blocks you'd like to arrange and then
use the same arrange icon to select your shape.

NOTE: You can also select multiple blocks by holding shift and clicking the desired
blocks.

How to Add Onboarding Prompts


Learn how to create a set of global user inputs shown before your app's main
workflow.
Overview

Onboarding prompts are a global set of user inputs that users will interact with
before they enter your app's main workflow. In this guide, we'll take a look at how
to create onboarding prompts and when they are best used in your app.
Create an Onboarding Prompt

To create an onboarding prompt, you must navigate to your global settings by
selecting the root file and then global settings.

Select Entry Workflow

Select the entry workflow you'd like to create your onboarding prompts for. If you
only have one workflow, it will be selected by default.

Add User Inputs


To create a user input, tap the (+) icon in the bottom left of the onboarding
container. You can create inputs from scratch or add in existing inputs you've
already made.

Global Logging

Global Logging allows you to log user transcripts that can be accessed in the
dashboard section of your app. You can enable this feature by selecting Enable
from the dropdown menu.

NOTE: Logging is only available to those subscribed to our premium accounts.


Using Onboarding Prompts

When creating onboarding prompts, think about the data being collected as data
that is consistent but might need to be edited by your users later on.
Onboarding prompts should collect data that your users won't have to provide
every time they create a new session. This could include items like sales team
preferences, market trends, and writing samples.
It is recommended that onboarding data be grouped under a single variable that
can be referenced in the prompt or in a Send Message automation block later
on.
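As a minimal sketch (the group name onboarding below is hypothetical; use whatever folder name you created), a Prompt could then reference the grouped onboarding data like this:

```
## About Human
The Assistant tailors its responses using the following onboarding information provided by Human:
{{onboarding}}
```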

How to Create Branching Automations


Learn how to create branching automations that cater to your users' specific
responses
Overview

Branching capabilities have unlocked even more potential when it comes to
building out your automation workflows. With branching, you can now create
separate paths for your users depending on their responses. In this guide, we'll
discuss how to implement branching in your application.
Menu Block

The Menu block is a multiple-choice user input block that allows you to connect
responses to other automations in your workflow.
To use this block, tap the (+) icon or right click anywhere on the automation
canvas and from the dropdown menu select Menu.
Configuration

In the configuration panel on the right, add your prompt question and answer
options. Tap the (+) icon at the bottom of the container to add as many answer
options as you'd like.

Assign Responses

To assign your response to a specific block, tap the circle icon next to your
response. When the automation canvas turns blue, select the block you want to
send that response to.

NOTE: You will know that the response is assigned when the circle icon is filled
blue.
Conclusion

The menu automation block opens up the capability to create branching workflows
by assigning responses to different blocks on the automation canvas. This allows
your users to dictate the journey they take through your app and create a more
personalized experience.

How to use the User Input Block


Learn how to use the Collect Input block in your automation workflows
Collect Input Block

The Collect Input block lets you collect raw data inputs at a specific moment of
your automation workflow.
Create a Collect Input Block

To create a collect input block, tap on the (+) icon anywhere on the automation
canvas and select Collect Input from the menu.

Add User Inputs

Next, in the block configuration panel on the right, tap the (+) icon at the bottom
of the user inputs container.

This will open a modal where you can add an existing input or create a new one by
tapping “Create New”.

Once youʼve added one or more inputs you can rearrange the order, set inputs as
optional or required, and edit them by tapping the arrow icon next to the input
name.

How to use the Send Message Block


Learn how to use the Send Message block in your automation workflows
Send Message Block

The Send Message block is a synthetic message that is sent to the AI on behalf of
the user.

Note: The user will never see this message


Create a Send Message Block

To create a Send Message block, tap the (+) icon on the automation canvas and
then select Send Message from the menu.

Craft A Message

In the configuration panel on the right, you can type in your message in the space
provided. You can format this message to reference variables such as user inputs,
data sources, and more.
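For example, here is a minimal sketch of a Send Message body that references variables; the variable names {{topic}} and {{query_results}} are illustrative and would need to match variables defined elsewhere in your workflow:

```
Write a LinkedIn post article about: {{topic}}

Use the following research as additional context:
{{query_results}}
```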

Note: You can also use this message to progress a chat without needing the user
to interact with the AI.
Message Settings

Response Behavior

The response behavior allows you to choose whether you want the AI's response
to the message to be displayed to the user or assigned to a variable that can be
referenced in other areas of your AI workflow.

Sender

The Sender dropdown allows you to dictate where your message is sent. If “user”
is selected, then the AIʼs response will be sent to the user without showing the
content of the message. If “System” is selected, the contents of the message will
be sent directly to the user along with a simulated response.

How to use the Query Data Block


Everything you need to know about running Data Queries in your automation
workflows
Query Data Block

The Query Data block allows you to run specific queries on a data source, and
assign the results to a variable.
Note: You must have created a Data Source in order to use the block.
Create a Query Data block

To create a Query Data block, tap the (+) icon anywhere on the automation canvas
and select Query Data from the menu.

Next, in the block configuration panel on the right, select your desired data source
from the dropdown menu, assign a variable name, and set the max results.

The Query Template

The query template is the search query that will be run against your data source.
You can use a User Input block to collect the user's query and then reference that
variable in the query template.
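For example, assuming a Collect Input block stored the user's question in a variable named {{user_question}} (a hypothetical name), the Query Template could be as simple as:

```
Find information related to: {{user_question}}
```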

Note: You must provide a query for this block to work.


Referencing the Query Result

Now that you are running this query in your automation workflow, you can
reference the results of that query via its assigned variable name.

How to use Data Source Explorer


Learn how to use the Data Source Explorer at the end of your automation workflow
Data Source Explorer

The Data Source Explorer is an end state automation block that allows the user to
view the chosen data source while interacting with the AI.
Create A Data Source Explorer Block

To create a data source explorer block, tap on the green block at the end of your
automation workflow. Then, in the configuration panel on the right, select Data
Source Explorer from the behavior dropdown.

Chat Settings

You have the option to create a system introduction that will appear at the
beginning of the chat session.

Message Processing

Message Processing allows you to automatically query a data source using
Retrieval Augmented Generation (RAG). This will allow you to provide additional
context to each user message.

NOTE: If using Retrieval Augmented Generation (RAG), you may include variables
inside of your Message Processing Template.

How to use Revise Document


Learn how to add the Revise Document block to the end of your automation
workflow
Revise Document Block

The Revise Document block is an end state automation that allows the initial
output to be directly editable by the user via a Rich Text Editor. The user can also
generate changes to their document with an AI powered tool.

Create a Revise Document Block

First, tap on the green block at the end of your automation workflow. Then, from
the behavior dropdown menu, select Revise Document.

Document Settings

Revision Template

The Revision Template is a default prompt used to generate document revisions.
You can use the variable {{selectedtext}} to reference the text the user has
selected and {{instructions}} to refer to the user's instructions.
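For example, here is a minimal sketch of a Revision Template that uses these two variables:

```
Revise the following selected text according to Human's instructions.

Selected text:
{{selectedtext}}

Instructions:
{{instructions}}
```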

System Introduction

The System Introduction is an optional message that will be displayed to the user
at the top of the initial output.

Message Processing

Message Processing allows you to automatically query a data source using
Retrieval Augmented Generation (RAG). This will allow you to provide additional
context to each user message.

NOTE: If using Retrieval Augmented Generation (RAG), you may include variables
inside of your Message Processing Template.

How to use the Logic Block


Learn how to use AI to automate decision making inside your workflow.
Overview

The Logic Block is an automation block that lets you dynamically route users
through an AI workflow based on specific conditions. It provides a more intuitive
approach to evaluating expressions and opens up possibilities for innovative
applications and workflows.
Configuring the Block

To create a new Logic block, tap the (+) icon anywhere on the
automation canvas and select "Logic" from the dropdown menu.
From here you can configure the two main settings of the block.

Conditions

Conditions act as "if-then statements" that allow you to route user inputs and
other information to specific actions.

Condition Writing "Best Practices"

1. Write conditionals as the action/decision you want the AI to make. Avoid using
personal pronouns such as I or You.

Bad Example
I want the {{lead}} to be valuable.
Good Example
The {{lead}} is valuable based on the following: {{context}}

2. Use quotation marks around your variables when inserting them directly into the
sentence of your condition.

Example
The "{{lead}}" is not valuable based on the following: {{context}}

Once your conditions are written, you can route them to other blocks by tapping
the condition and then tapping the block you'd like to send it to.

Logic Engine

The Logic Engine is essentially a prompt that takes in the value of the conditionals
and evaluates them to return an active result.

NOTE: The Logic Engine contains open-source settings available on GitHub, where
you can contribute your own.

How to Use Multiple AI Models inside of a Single MindStudio AI


Learn how to integrate multiple models into a single AI app workflow

Overview

In this guide, you will learn about the ability to override model settings on a per
message basis. This feature will allow you to customize the model settings for
individual messages within your application.
Steps

Create Send Message Block

Add a "send message" block by tapping the (+) icon the automation canvas.

Override Model Settings


Once you've added your message, navigate to the model settings and select the
model you'd like to use.

From here you can adjust the temperature, max response size and whether or not
you'd like to include the prompt.

NOTE: Changing these settings will only affect the model used on this block and
will not carry over to the rest of the project.
Optimizations and Use Cases


Allows for optimizations beyond raw output quality, such as speed and
performance.

Highlights the use of smaller, faster models for initial automations and larger
models for deeper analysis.

Useful for selecting the most suitable model for specific sub-tasks.

Aids in breaking apart the reliance on a single model for an entire application.

Conclusion

By customizing settings for individual messages, developers can optimize
performance and tailor model selection to specific sub-tasks. This feature
enhances the versatility and efficiency of natural language processing
applications.

Add Files to a Data Source


Learn how to add files to a Data Source in MindStudio.
Overview

Data Sources allow you to query multiple files at a time and save those results
as a variable that can be referenced elsewhere in your app. In this guide, you will
learn how to add multiple files to a single data source.
Navigate to the Data Sources Tab

First, navigate to the data sources folder in the resources panel. Select the data
source you'd like to add a file to by opening the folder.
Upload a new data source

Tap the (+) icon under your current file to add an additional file.

View that your new file is available

Navigate back to the resources panel. When you open your data source folder, you
should see all of your existing files.

You can view all the elements of your new file by selecting it from the menu.

Top 4 Mistakes People Make Using Data Sources


Learn the do's and don'ts of using Data Sources in MindStudio.

Top 4 Data Source Mistakes

1. Not referencing the Data Source

Just because you have uploaded a Data Source does not mean that the AI
automatically knows everything about that Data Source. Make sure you
are referencing that Data Source in Message Rules or Automations.
2. Using corrupted/bad files

When uploading a Data Source, not only should you make sure that the file
type is correct, but that the text of the file is being properly extracted.

By tapping on Extracted Text from the top bar of the modal, you can see
whether or not the file is being properly uploaded.
3. Query Data block issues

You must use the Query Template to instruct the AI about the specific
query that is being made. If the AI doesn't know what to query, there will
be no output.
4. Overloading the Prompt

Referencing a Variable multiple times can confuse the AI and produce undesired
results.

How to use Custom Functions in MindStudio


Everything you need to know to start running custom JavaScript functions
Overview

Custom functions are a way to execute powerful JavaScript code within AI
workflows, enabling various functionalities such as fetching data from the web,
interacting with third-party APIs, and much more.
Accessing Functions

To access functions, add a new function block on your automation canvas. To do
this, tap the (+) on the automation canvas and choose Run Function.

All functions can be found in the "Functions" tab, located in the navigator on the
left-hand side of the screen.

NOTE: You can choose from the functions you have already created or
community-sourced functions created by other developers.
Creating Custom Functions

When creating your own functions, you will find three sections to configure:
Code Tab

This is where you can write your own custom JavaScript code to execute within
your AI workflow.
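As a rough, illustrative sketch only (how inputs are passed in and how results are returned to the workflow depends on the interface you define in the Configuration Tab, and the URL and field names below are placeholders), a custom function body might fetch data from a third-party API like this:

```
// Illustrative sketch of a custom JavaScript function, assuming the runtime
// provides the standard fetch API. The URL and response fields are placeholders.
async function fetchCurrentPrice(apiUrl) {
  // Request JSON data from a third-party API.
  const response = await fetch(apiUrl);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }

  // Parse the JSON body and keep only the fields the workflow needs.
  const data = await response.json();
  return {
    symbol: data.symbol, // placeholder field
    price: data.price,   // placeholder field
  };
}
```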

Configuration Tab

In this tab, you can define a no-code interface that AI creators can fill out when
using the function in their own AI. Additionally, you can set the block style in this
tab to make function blocks easily recognizable within the automation canvas.

Test Data Tab

This tab allows you to test your functions and view the console response.

Submitting Functions for Community Use

You have the option to submit functions you have created for community use via
GitHub. By right-clicking on the function and selecting "Submit Template," you can
open the GitHub page to submit your pull request.

Expanding AI Capabilities
Custom functions are a foundational feature that significantly expands the
capabilities of AIs in MindStudio.
With custom functions, developers can create powerful and customizable
functionalities that AI creators can integrate into their workflows.

How to use the Zapier Webhook Function Block


Learn how to use the Zapier Webhook block in your Automation workflows
Overview

With the Zapier Webhook block, you are able to send input data from your app
directly to Zapier, allowing for additional integration within your Automation
workflow. You must have a Zapier account in order to use the function block.
Create A Zapier Webhook

To create a Zapier Webhook, tap the (+) icon on the automation canvas. Then
select Run Function from the menu.

Under Community Functions, select Zapier Webhook from the function blocks list.
Configure the Function

In the configuration panel on the right, you will need to provide the function with
two parts:

Zapier Webhook URL

This is the URL provided to you by Zapier when you create a Webhook trigger.
Input

The input is the raw data sent from your app to Zapier. You can use this field to
send data via an input using a variable.

NOTE: Make sure to type in your variable using double braces.
Example: {{MyVar}}
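For background, Zapier catch hooks accept an HTTP POST with a JSON body. A rough, illustrative sketch of the equivalent call in a custom JavaScript function (the webhook URL and payload fields below are placeholders) would look something like this:

```
// Illustrative sketch only: POST JSON data to a Zapier catch hook.
// Replace the placeholder URL with the Webhook URL provided by Zapier,
// and the payload fields with whatever your Zap expects.
async function sendToZapier(webhookUrl, payload) {
  const response = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!response.ok) {
    throw new Error(`Zapier webhook returned status ${response.status}`);
  }
  // Zapier responds with a small JSON acknowledgement.
  return response.json();
}
```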

How to use the Mailchimp Function Block


Learn how to integrate your Mailchimp account into your automation workflow
Overview
The Mailchimp function block allows you to collect leads and import them into
your Mailchimp mailing list from your app's automation workflow.

NOTE: You must have a Mailchimp account to use this function block.
Create a Mailchimp function block

To create a Mailchimp block, tap the (+) icon on the automation canvas and select
Run Function from the dropdown menu.

Locate the Community Functions button and select Mailchimp from the list.
Configure the Mailchimp Block
API Key

Retrieve the API key from your Mailchimp account by locating your profile page
and then selecting Account and Billing.
Next, under the Extras tab, select API Key.
Scroll until you find the Create A Key button. Tap the button, give your API key a
name, and then generate the key.
Copy the generated key and paste it into the API key section of the configuration.
List ID

To find the List ID, go to Audience and then All Contacts.
Under Settings, choose Audience names and defaults.
From there, you can copy the Audience ID and paste it into the required field.
Data Center

To locate the Data Center, check the end of your API key. For example, if you're in
the US, the Data Center might read "us10".
Once you have found the Data Center, paste it into the required field.

NOTE: The Data Center determines your API endpoint URL and will depend on
where you are based geographically.
Input

The input is the raw data that is being sent to Mailchimp.

To use the block correctly, you'll want to set up a user input that collects a user's
email address. Provide that input with a variable name and then add the variable
name to the input field.

NOTE: Make sure to use double braces when adding in your variable name.
Example: {{MyVar}}
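For background on what this block does with these settings, Mailchimp's Marketing API adds a contact by POSTing to the lists endpoint on your data center's host, authenticated with your API key. Here is a rough, illustrative JavaScript sketch (all values are placeholders, and the block handles this for you):

```
// Illustrative sketch only: add a subscriber via the Mailchimp Marketing API.
// dataCenter, listId, apiKey, and email are placeholders supplied elsewhere.
async function addSubscriber(dataCenter, listId, apiKey, email) {
  const url = `https://${dataCenter}.api.mailchimp.com/3.0/lists/${listId}/members`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      // Mailchimp accepts HTTP Basic auth: any username, API key as the password.
      Authorization: "Basic " + btoa(`anystring:${apiKey}`),
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ email_address: email, status: "subscribed" }),
  });
  if (!response.ok) {
    throw new Error(`Mailchimp returned status ${response.status}`);
  }
  return response.json();
}
```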

How to use the DALL-E Image Generation Function Block


Learn how to use DALL-E to generate images inside of your app workflows

Overview

In this guide, you will learn how to generate images and include them in your AI's
output. In order to use this function, you must have access to an OpenAI account
with an API key.
Steps

Configure OpenAI API Key


Navigate to your OpenAI account and access the API Keys section.

Create a new secret key and give it a name.

Copy the generated API key to your clipboard.

Select DALL-E Model


In MindStudio, add a new "Run Function" block.

In the Community Function section, select "Generate Image."

Choose the DALL-E 3 model or leave it blank to use DALL-E 3 by default.

Generate Prompt for DALL-E


Create a new "Send Message" block to generate a prompt for DALL-E.

Customize the prompt to be eye-catching and relevant to the topic.


Assign the prompt to a variable, Example: {{image_prompt}}.

Use the DALL-E Block


Inside the DALL-E block, use the {{image_prompt}} variable to prompt DALL-E
for image generation.

Save the output variable, which will store the image URL.

Display Image to User


Create another synthetic message using the "Send Message" block, with the
sender being the system.

Include the image URL in the message using markdown formatting.
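For example, assuming the image URL was saved to an output variable named {{image_url}} (a hypothetical name; use whichever variable you assigned in the previous step), the message could display the image with standard Markdown image syntax:

```
![Generated image]({{image_url}})
```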

Test the Image Generator


Open the function and navigate to the test data tab.

Use the API key and a sample prompt to run the function and verify image
generation.

Conclusion

In conclusion, utilizing the DALL-E image generator enhances the visual appeal of
the AI-generated content in MindStudio.

How to Embed Your AI App


Learn how to embed your application onto third-party websites
Enable Embed Functionality

Once you've completed your AI build, you can enable the embed capability by
tapping on the root file from the resource panel and then selecting Embedding
under the Access section.

Next, tap the Enable button to open the embed settings.


NOTE: Embedding is only available to those with a premium YouAi subscription.
Configure Embed Settings

Authorize Domains

To authorize the domain, paste your website URL into the space provided and
tap save. This will give your website access to display the embed.

You are able to include multiple URLs if you'd like to embed your app onto multiple
sites.
Embed Code

The Embed Code block displays the code used to embed your app. Copy the
code by tapping the copy button in the top right of the code block.

Next, you'll want to paste that code into an embed block inside of your website
builder. Make sure you place the embed block in the desired location you'd like the
app to appear on your website.

NOTE: Creating the embed will vary depending on the website builder you use.
Make sure to check the rules offered by your website provider.
Advanced Embed Code

Advanced embed code allows you to include your user's user ID if they are already
logged in to your site. You can also change the target frame ID or enable more
verbose logging.
You can access this code by tapping on the Show Advanced Configurations
button.

Publish

Once you've completed these steps, publish your website to view the embedded
app. For apps embedded on your website, users DO NOT have to log in to gain
access.

NOTE: The embedded app can only be viewed when you have published your
website. You will not be able to view the embed inside of your website builder.

Customize your AI Branding & Styles


Use custom styles to transform the look and feel of your AI app
Overview

In this guide, you will learn how to fully customize the styles and branding of your
AI applications.

NOTE: To modify custom styles and branding, you will need to be subscribed to a
Pro or Business plan in MindStudio.
Accessing Custom Styles

To access custom styles, select the Root file from the resources panel.

Then, navigate to Style and Branding from the General menu and tap the toggle to
enable custom styles.

Modifying Styles

Once you have enabled custom styles, you can start customizing various elements
of your AI app's appearance.
Background Color

You can change the background color to match your brand's color palette. Copy
the hex codes from your preferred color palette and paste them into the app.

Alternatively, you can use the Color Picker tool to select colors, modify opacity,
and adjust the hue as needed.
Text Color

Customize the text color to ensure optimal readability. Choose a color that
complements your background color.

Buttons and Accents

Modify the color of buttons and other accents to create a cohesive visual
experience.

Fonts

Change the font of your AI app to align with your brand's typography guidelines.
Select a font that represents your brand's personality and enhances readability.

Corners
Customize the corner style of various elements, such as buttons and containers,
to add a unique touch to your AI app's design.

Preview Changes

As you make changes to the styles, you can see a live preview on the right-hand
side of the screen.
The preview will update in real-time to showcase the transformed look and feel of
your AI app.

Publish and Apply Changes

Once you are satisfied with the custom branding, click on the "Publish" button.

After publishing, refresh the page to see the updated changes applied to your AI
app. All the theming changes made will also be reflected in any embedded AI
applications.
Conclusion

Custom branding allows you to completely transform the look and feel of your AI
apps. By changing colors, fonts, and styles, you can create a unique and branded
experience. This feature is easy to use and helps your AI applications stand out or
blend seamlessly into your website's branding.

Configure Payment Settings


Connect a Stripe account and start accepting payments.
Configure Payment Settings

In this guide, we'll walk through the steps of enrolling in the YouAi Developer
Program and setting up Stripe to start accepting payments for your app.

NOTE: You must be subscribed to the Pro or Business tier in order to charge for
use of your app.
Locate Payments

First locate your payments by going to Account Settings then Developer Settings.

Set up Stripe via YouAi Developer Program

Next, connect your Stripe account to enroll in the YouAi Developer Program.
Select "Connect Stripe account" and follow the steps.

Make sure your status has changed to "Enrolled". You can update your payment
settings at any time by opening your Stripe Dashboard.

Types of Data That Pass Through the AI


Learn about the different types of data that pass through the AI and where to
place it in your application
Overview

Raw data provides the context for the AI's functionality and how it interacts with
the user. In this guide, we'll discuss the different types of data that an AI might
collect and where you will want to capture that data when building your
MindStudio app.
Types of Data

Global Data

Global data refers to data that hardly ever changes and is built into the
knowledge base of your AI.
This data is captured as the context you provide to your AI in the form of a prompt.
You can use the prompt to define the role of your AI and provide step by step
instructions on how it should perform.
Global data can also include uploaded data sources that don't need to be
continuously referenced throughout your automation workflow. For example,
providing the AI with a data source that includes company information like a
mission statement and brand documentation would be considered global data.
Examples of Global Data: Company info, brand documents, training materials,
and guides.
Onboarding Data

Onboarding data refers to data that is consistent but might need to be edited by
your users later on. This data should be captured in the onboarding flow of your
app, in the form of user inputs.
It is recommended that onboarding data be grouped under a single variable that
can be referenced in the prompt or in a Send Message automation block.
Examples of Onboarding Data: Sales team preferences, market trends, and
writing samples.
Runtime Data
Runtime data refers to data that changes every session and will need to be filled
out each time a new session starts.
This data is captured in the automation flow of your app via user inputs and
data queries. Runtime data is typically the last piece of raw data passed through
the AI before a chat session.
Examples of Runtime Data: Customer info, Point of Contact info, and specific
data queries.
Conclusion

Knowing where and at which point to provide context and capture user data is an
integral part of the app creation process. It ensures that the AI provides your users
with the correct outputs and functions efficiently. Remember to always test your
AIs and experiment with different flows to find what works best for your use case.

AI App Style Guide


Learn how to make your AI Apps stand out on the YouAi marketplace and when
sharing the link anywhere.
Overview

This guide contains guidance and best practices that can help you design a great
AI Details Page, graphics, and icons for any AI built using MindStudio.

NOTE: This guide highlights best practices for the YouAi marketplace - if you are
embedding your AI app elsewhere, some of these suggestions may not apply.
Branding your AI App

Your AI apps should express their brand in ways that make them feel instantly
recognizable while giving your users a consistent experience.
Quick Guidelines

Your AI App branding should:



Be based on its purpose or utility.

Be clear and easy to identify.

Have a consistent voice

Include language that feels familiar to the people using it

Be simple and easy to follow


Your AI App branding should NOT:

Promise features that don't exist

Rely on "clickbait" language in order to attract new users

Writing

The words you choose to describe your AI app are vital to your user experience.
When creating an AI app you have direct control over the AI App Name, Tagline,
and Description.
AI App Name

The AI App Name plays a critical role in how users discover it. Be sure to choose a
simple, memorable name that is easy to spell and speaks to what your app does.
Avoid names that use generic terms or are too similar to an existing app.

Max Length: 32 characters or less

Tagline

Your tagline is intended to summarize your AI app in a concise phrase. In the
tagline, you should try to clearly explain the value of your AI app, describe what
the AI app does for the user, or highlight features and use cases for your app.
Avoid using generic taglines like "Best AI For You".
Your Tagline appears below your AI App Name in the YouAi marketplace and on
any Share Card when sharing the link to the app across the internet.

Max length: 60 characters or less

Description

Your description appears on your AI App Details page. The ideal description
highlights the key features and functionality of your app. The description can be
as long as you'd like, and it's encouraged to write a few paragraphs in order to
properly fill out the page.
Your description should be written in the tone of your brand and should use
terminology that your audience will understand.
Your description is written using a text editor that allows you to:

Select headings (H1, H2, H3, H4, Paragraph)

Create bulleted lists

Add images

Include external links

Tips for writing your Description:

Highlight key talking points of the AI App


Describe the problem that your AI App solves
Describe the output. What do you get when you use it?
Describe how to use it. How does it work?
Describe any key benefits. What other benefits will you get from using it?
Utilize Header sections to separate ideas.
DO NOT use unnecessary keywords in an attempt to game search results.
DO NOT include specific pricing in your description – pricing is shown on the
page, and pricing in your description may become inaccurate.
DO NOT promise features that don't exist.
Images & Video

Use images and video to demonstrate the features and functionality of your AI
app. Images and video are for all audiences, so they must be appropriate for
people of all ages.
Share Card (Open Graph)

Share Card Size: 1200x630 pixels.


Your Share Card is one of the first things that any user will see before using your
app for the first time, so it's vital that it makes a strong first impression that
communicates your app's quality and purpose.
Icon

Icon Upload Size: 512x512 pixels


When someone chats with your AI, your icon is the AI's profile picture. Your icon is
the primary image a user will see when interacting with your AI app. Avoid
unnecessary details to be sure that the icon is legible in all sizes.
Icon Tips:

Use a solid color background or simple gradient


Use a single recognizable image in the center of your icon. This will ensure
that it is legible at sizes as small as 48x48px.
Avoid using a white background (#ffffff)
Image gallery

Gallery Image Size: 1200x630


You may display up to 10 images in your AI details page to demonstrate your AI
App's features, functionality, and use cases. Be sure to use your AI App's UI to
visually communicate the experience for new users.
Gallery Image Tips:

Create an image that shows some examples of AI responses


Create an image that shows some use cases
Create an image that highlights key benefits to using your AI App
Do not use the Open Graph image as part of your Image Gallery
Preview Video

Video Aspect Ratio: 16:9 (landscape)


Video Recommended Size: 1920x1080 pixels
Video Recommended Length: less than 2 minutes
Similar to an image in your image gallery, your preview video should demonstrate
your AI App's features, functionality, use cases, and key benefits.
AI App previews do not autoplay with muted audio when users preview your App
Details Page. Make sure that the first few seconds of your video are visually
compelling.
Example #1: Chat Response Message Crafter

Open Graph Image, AI App Name, Tagline for this AI App

| Element | Grade | Notes |
| --- | --- | --- |
| Open Graph Image | BAD | Image is unclear. Image does not visually communicate the AI App's functionality. Image does not catch the eye. |
| AI App Name | OKAY | Name somewhat describes app functionality. Name is too general and vague. Name is too long. |
| Tagline | BAD | Tagline is too general and vague. |

AI Details Page, App Icon

| Element | Grade | Notes |
| --- | --- | --- |
| AI App Icon | BAD | Image is not clearly legible in smaller formats. Image does not visually communicate the AI App's functionality. |
| AI Description | BAD | Description does not highlight use cases or key benefits. Description does not clearly describe how it works. Description is vague. Description is too short. |
| Image Gallery | BAD | No images uploaded. |
| Preview Video | BAD | No preview video uploaded. |

Example #2: CleverContent AI

Open Graph Image, AI App Name, Tagline for this AI App

| Element | Grade | Notes |
| --- | --- | --- |
| Open Graph Image | OKAY | Image is clear, simple, and easy to understand. Image relates to the functionality of the AI App. Image uses too much text. Subtitle on image is hard to read. |
| AI App Name | GOOD | Name somewhat describes app functionality. Name is unique, easy to spell, and easily recognizable. |
| Tagline | GOOD | Tagline describes app functionality. Tagline is not too long. |

AI Details Page, App Icon

| Element | Grade | Notes |
| --- | --- | --- |
| AI App Icon | GOOD | Image is clearly legible in smaller formats. Image visually communicates the AI App's functionality. |
| AI Description | GOOD | Description highlights use cases and key benefits. Description clearly describes how it works. |
| Image Gallery | GOOD | Images visually communicate app functionality. Images visually communicate key benefits. |
| Preview Video | GOOD | Preview video shows how the AI App works. |
Conclusion

Every element of your AI App Store listing has the power to drive downloads of your app.
Use these written and visual elements to help customers discover your app and
engage them through thoughtfully crafted content on your AI Details page.
