MindStudio Documentation
Fine-tune and publish an AI-powered app using an intuitive no-code interface
Overview
Creating a New AI
To create a new AI, navigate to “My AIs” and then tap “New AI”. This will open
MindStudio, our Integrated Development Environment (IDE) for creating AIs.
In the bottom left you can find quickstart links to tutorials, pre-built templates, the
Prompt Generator Engine, and more.
The Prompt
The Prompt is the set of initial system instructions that you provide to your AI. The
Prompt allows you to define your AI's role and how it is meant to assist the user.
Simply type in your instructions or use AI to generate new prompts using our open
source Prompt Generator Engine.
Automations
Data Sources
Data Sources allow you to integrate custom data to enhance the capabilities of
your AI. When you add a new Data Source, you allow the AI to reference that data
before generating a response to your users.
Integrating a custom data source allows the AI to leverage additional information
and domain-specific knowledge beyond its default training data. This provides a
simple way to customize the AI's knowledge for your specific needs.
Variables
Variables are used to represent parts of data that we pass through the AI.
Variables can be recognized by the AI in both the prompt and automation sections
of the editor.
Adding double curly braces around your variable name will allow the AI to recognize
that variable. Example: {{myVar}}
Model Settings
Different LLMs have varying strengths. Depending on the use case
of your app, some models might perform better than others.
MindStudio allows you to choose the best model for your particular needs.
Publishing
To configure the publish settings, click on the Root file on the left panel.
When writing your Prompt you may test the response output by using the Prompt
Tester on the right. You can also quickly troubleshoot your app by using the Error
and Debugger tab on the left panel.
To preview the AI without publishing, you can open the AI in draft mode by tapping
on the Open button at the top right and selecting Preview Draft.
AI Dashboard
Every app comes with an individual dashboard that allows you to track your total
earnings, user counts and more.
The Prompt is the set of initial system instructions that you provide to your AI. The
Prompt allows you to define your AI's role and how it is meant to assist the user.
You can locate the prompt section by navigating to the Main.flow file and then
tapping on the Prompt tab.
Prompts are written using natural language. There is no code knowledge required
to write out your prompt.
NOTE: To improve readability for collaborators or for later editing, you may
optionally write your prompt using Markdown. Check out the Markdown
Cheatsheet for more information on using Markdown.
When you first create a new AI, you'll see the Quickstart menu panel in the bottom
left of the prompt environment. These tools are meant to assist you in writing your
prompt if you're not sure where to start.
The Default Starter Prompt will import a basic prompt to help get started with
writing your own.
Browse Templates
The Browse Templates button will open up a modal of pre-built templates to help
with specific use cases. You can import the template and then edit the prompt to fit
your needs.
Generate Prompt
This will open the Prompt Generator Engine where you can use AI to generate the
prompt for you. Simply type in your app's function and let the AI do the rest.
NOTE: You can also link to video tutorials or browse our Discord community to
learn more about building AIs.
The content of your prompts will vary for each AI depending on your AI's specific
use case and requirements.
When writing your prompt, it's good practice to start by defining your AI's role and
then giving it a task to complete. To distinguish between the AI and the user, you
may want to refer to your AI as “Assistant” and the user as “Human”. Also, make
sure to use a third-person POV when writing out your prompt instructions.
Example: The Assistant is a Youtube Metadata Generator that assists Human with
generating Youtube titles and descriptions for their videos.
Properly formatting your prompt can drastically improve your AI's responses. You
can break up your Prompt into sections and let each section describe a
different element of how the AI should act.
For example, you can use a number sign to create a header and then create a
section called Assistant Output. You can then create line items of all the elements
youʼd like the AI to output at the end of the workflow.
You can follow the same steps for other sections of the prompt.
NOTE: Make sure not to overload the AI by adding the same variable multiple
times throughout the prompt. This will provide the AI with superfluous context,
and may result in undesirable responses.
You can also inject additional context from variables into your Prompt by using
double braces and the variable name.
Training Date: The Training Date column displays the date through which the LLM has
been trained on “the Internet at large.” The more recent the training date, the more
current the data that LLM can process, parse, and evaluate in preparation for its
response.
MindStudio Plan: The MindStudio Plan column displays the lowest plan that
MindStudio supports that LLM.
| LLM Developer | Notes | MindStudio Plan |
| ----------- | ----------- | ----------- |
| Anthropic | | Free |
| Google | | Free |
| Meta | Llama-2 70B Chat: trained early 2023. Code Llama: fine-tuned for coding and codebase usage. | Free |
| Mistral AI | | Free |
| OpenAI | Less restrictive. | Free |
General Use Case Strengths and Drawbacks in Specific LLMs
LLM Strengths
MindStudio allows you to use multiple LLMs within one app. Take advantage of the
LLM that best addresses the need for each task.
For example, general usage LLMs can do the following well:
- Multimodal: They can process text, speech, and images.
- Larger data sets: They can leverage the knowledge and data from a large and diverse set of data.
However, if the task requires writing that mimics human prose well in regards
to fluidity and style, without repetitive phrasing, then only particular LLMs do that
well.
As a best practice, consider these strengths for the following LLMs when planning
your AI-powered app:
| LLM Name | Description of Strength(s) |
| ----------- | ----------- |
| Claude 3 Opus | Best mimics human writing, including creative writing and copywriting. Articulates thoughts clearly with the most human-like style. |
| Mistral AI 7B Instruct | Lowest cost usage, yet performs well. Also the least restrictive in response content. |
Avoid the drawbacks that specific LLMs have. For example, despite the strengths
of the general usage LLMs, they do have drawbacks:
- Policy restrictions: Some companies restrict how their LLMs may be used. This may become relevant for your app.
- Non-specialized: General usage LLMs are less efficient and less effective than LLMs specialized to perform particular tasks or domains.
As a best practice, consider these drawbacks for the following LLMs when
planning your AI-powered app:
| LLM Name | Description of Drawback(s) |
| ----------- | ----------- |
| Claude 3 Opus | Among the most restrictive LLMs. Avoids controversial topics. |
| Anthropic Claude 2.1 | Among the most restrictive LLMs. Avoids controversial topics. |
| Mistral AI 7B Instruct | Permits NSFW content. |
| Google Gemini Pro | Among the most restrictive in acceptable requests. Diversity and inclusion policies can generate inaccurate responses regarding historical figures. |
Context Window Size
This section outlines the size of the context window for each LLM MindStudio
supports. The context window is the number of tokens that the LLM accepts as
part of an AI prompt. The more tokens a LLM supports, the more information or
data that LLM can process from a prompt. A token is approximately four
alphanumeric characters.
If your AI-powered app must send a large amount of data as part of a prompt,
especially within a Send Message block, select an LLM with the largest context
window size within your MindStudio plan. If your prompt includes a Variable that
stores a large amount of data or references a Data Source from which to process
its data, context window size becomes very important.
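Since a token is roughly four characters, you can sanity-check whether a prompt's data is likely to fit a model's context window with a quick estimate. This is a rough sketch using the ~4-characters-per-token heuristic from this guide; the function names are illustrative, not part of MindStudio:

```javascript
// Rough token estimate using the ~4 alphanumeric characters per token
// heuristic described above. Function names are illustrative only.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Will a prompt (plus injected variable data) likely fit a model's window?
function fitsContextWindow(promptText, contextWindowTokens) {
  return estimateTokens(promptText) <= contextWindowTokens;
}
```

For example, a Data Source excerpt of 500,000 characters estimates to about 125,000 tokens, which fits within GPT 4 Turbo's 128,000-token window but not GPT 4's 8,192.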
The following table lists LLMs in order of largest context window size from top to
bottom.
| LLM Name | Context Window Size (Tokens) | MindStudio Plan |
| ----------- | ----------- | ----------- |
| Claude 3 Opus | 200,000 | Pro and higher |
| Claude 2.1 | 200,000 | Pro and higher |
| OpenAI GPT 4 Turbo | 128,000 | Pro and higher |
| Anthropic Claude Instant 1.2 | 100,000 | Free |
| Google Gemini Pro | 30,720 | Free |
| OpenAI GPT 3.5 Turbo 1106 | 16,385 | Free |
| OpenAI GPT 4 | 8,192 | Pro and higher |
| Google PaLM 2 | 8,000 | Free |
| Meta Code Llama | 4,096 | Free |
| Meta Llama-2 13B Chat | 4,096 | Free |
| Meta Llama-2 70B Chat | 4,096 | Free |
| Mistral AI 7B Instruct | 4,096 | Free |
| Mistral AI Mixtral 8x7B Instruct | 4,096 | Free |
| OpenAI GPT 3.5 Instruct | 4,096 | Free |
Quality of Responses
Some tasks in your AI-powered app may require that the response be of high
quality: that it is accurate and complete. For example, if the LLMʼs task is to
perform a legal document analysis on a legal brief, then the quality of response is
a priority.
The following table lists LLMs in order of quality-of-response ranking from top to
bottom.
| LLM Name | MindStudio Plan |
| ----------- | ----------- |
| OpenAI GPT 4 Turbo | Pro and higher |
| OpenAI GPT 4 | Pro and higher |
| Mistral AI 7B Instruct | Free |
| Claude 3 Opus | Pro and higher |
| Anthropic Claude 2.1 | Pro and higher |
| Mistral AI Mixtral 8x7B Instruct | Free |
| OpenAI GPT 3.5 Turbo 1106 | Free |
| Google Gemini Pro | Free |
| Anthropic Claude Instant 1.2 | Free |
| Meta Llama-2 70B Chat | Free |
| Meta Llama-2 13B Chat | Free |
| Google PaLM 2 | Free |
| Meta Code Llama | Free |
| OpenAI GPT 3.5 Instruct | Free |
Response Time
This section outlines response times for LLMs that MindStudio supports.
Response time is measured as throughput: the rate at which the model can
process and generate text or responses in a given timeframe.
How much data is the LLM processing for a task? If the Send Message
block includes a large amount of data, such as the content stored by a
Variable or if the LLM references the vector database for a Data Source, then
response time is affected regardless of the LLM performing that task.
The following table lists LLMs in general order of throughput from top to bottom.
Throughput is measured as the number of tokens (tks) per second.
| LLM Name | Throughput (tks/second) | MindStudio Plan |
| ----------- | ----------- | ----------- |
| Anthropic Claude Instant 1.2 | 72.118 | Free |
| Mistral AI 7B Instruct | 69.238 | Free |
| Mistral AI Mixtral 8x7B Instruct | 69.238 | Free |
| Google Gemini Pro | 67.027 | Free |
| Meta Llama-2 13B Chat | 35.714 | Free |
| OpenAI GPT 3.5 Turbo 1106 | 30.574 | Free |
| OpenAI GPT 4 Turbo | 25.126 | Pro and higher |
| Meta Llama-2 70B Chat | 22.579 | Free |
| OpenAI GPT 4 | 19.563 | Pro and higher |
| Anthropic Claude 2.1 | 14.144 | Pro and higher |
| Claude 3 Opus | - | Pro and higher |
| Google PaLM 2 | - | Free |
| Meta Code Llama | - | Free |
| OpenAI GPT 3.5 Instruct | - | Free |
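Throughput also gives a back-of-envelope way to estimate how long a response will take to generate. The helper below is a minimal, illustrative sketch (not a MindStudio API):

```javascript
// Estimate generation time in seconds from the expected output length and a
// throughput figure (tokens per second). Illustrative helper only.
function estimateResponseSeconds(outputTokens, tokensPerSecond) {
  return outputTokens / tokensPerSecond;
}

// For example, a ~500-token response at 25.126 tks/second (GPT 4 Turbo)
// works out to roughly 20 seconds.
```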
Response time benchmarking changes rapidly. Furthermore, semiconductor and
inference technology improve constantly, which affects response times. Visit YouAI's
Discord server for updates.
MindStudioʼs debugger checks the usage and response times for all LLMs your AI-
powered app uses. See Debugger.
The following lists the organizations that receive the most funding for their LLMs,
from top to bottom. Select a hyperlink to view the MindStudio-supported LLMs
that organization develops.
- OpenAI
- Meta
- Anthropic
- Mistral
The organizations that develop both proprietary and open-source LLMs receive
funding to improve reliability, usability, and new features. Reliability, stability, and
usability updates are priorities for these organizations, especially those that
provide enterprise-class services. Therefore, selecting an LLM for your AI-
powered app that provides high reliability and frequent updates ensures your own
appʼs reliability after you publish it.
Interacting with LLMs Using Non-English Languages
After you publish your AI-powered app, users may interact with your app using
non-English languages. If the audience and user base for your app predominantly
does not use English, then leveraging LLMs that provide support to non-English
languages is a priority.
The following table outlines the LLMs that support non-English languages.
| LLM Name | Non-English Languages | MindStudio Plan |
| ----------- | ----------- | ----------- |
| OpenAI GPT 4 Turbo | Spanish, Italian, Indonesian, and other Latin alphabet-based languages | Pro and higher |
| Claude 3 Opus | Spanish, Portuguese, Italian, German, and French | Pro and higher |
| Anthropic Claude 2.1 | Spanish, Portuguese, Italian, German, and French | Pro and higher |
| Mistral AI 7B Instruct | Spanish, Portuguese, Italian, German, and French | Free |
Variables are assigned values that represent various parts of data that are passed
through the AI. Variables are created, stored, and referenced across various parts
of your AI workflow.
Assigning Variables
Whenever you create a user input, you assign a variable name to that input. To do
this, tap on the plus icon next to the User Input folder in the left bar.
You can give your variable a name as part of the input creation process.
You can also create variables within Automations. From the Automation canvas,
you can create a Send Message block or a Query Data block and assign it a
variable name.
Assigning Send Message Block Results
First, create a Send Message block by tapping on the plus icon on the automation
canvas. Then, in the configuration panel on the right you can add your message
and then under Message Settings, change the response behavior to "assign to
variable".
Add your variable name and then use that variable to reference the Send Message
in other parts of your workflow.
Assigning Data Source Query Results as Variables
First, create a Query Data block by tapping the plus icon on the automation canvas
and then selecting Query Data. Then, in the configuration panel on the right, select
your data source from the drop-down menu and assign a variable name.
Now that Query Data block can be referenced in other parts of your app using the
variable name enclosed in double braces.
Referencing Variables
You can reference variables inside of your Prompt and inside of Automation
Blocks.
Variables are referenced in your main prompt or in an automation block by using
double curly braces and the variable name.
Example: {{MyVar}}
Referencing Variables in Your Prompt
You can reference a variable anywhere inside of your prompt by using double curly
braces and the variable name.
NOTE: When using variables inside of your prompt, it's best practice to use the
variable only once. Make sure not to overload the AI by adding the same variable
multiple times throughout the prompt. This will provide the AI with superfluous
context, and may result in undesirable responses.
Referencing Variables in a Send Message Block
You can reference variables in a Send Message Automation by adding the variable
into your message using double braces.
Select the Send Message block, then type your message into the configuration
panel on the right.
When you run an explicit data query using the Query Data Source Block, you must
use the Query Template to instruct the AI to specify the query.
For example, if you want to query a user input against a data source, you can add
that variable name in the Query Template.
Message Processing
If you use Retrieval Augmented Generation (RAG) inside of your Terminator Block,
you may include variables inside of your Message Processing Template.
Custom Functions
Some custom function blocks may be able to both call a variable and assign a
variable within the same block.
When you call a variable, you must make sure to wrap the variable name in double
curly braces.
When you assign the output variable, no double curly braces are needed.
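The double-brace convention can be pictured as simple template substitution. The sketch below is illustrative only and is not MindStudio's actual custom function runtime: it reads variables wrapped in double curly braces and writes new variables by bare name.

```javascript
// Illustrative only — NOT MindStudio's runtime. Demonstrates the convention:
// {{name}} reads a variable; a bare name is used when assigning one.
function resolveTemplate(template, variables) {
  // Replace each {{varName}} with the value stored under the bare name;
  // leave unknown placeholders untouched.
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in variables ? String(variables[name]) : match
  );
}

const variables = {};
variables.greeting = "Hello"; // assigning: bare name, no braces
resolveTemplate("{{greeting}}, Human!", variables); // calling: double braces
```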
AIs read variables as raw data. This means that the placement of your variables
becomes an important factor in your AI's ability to accurately understand the
action it is supposed to take.
Bad Example:
Tell me about {{myBigDataSet}} and tell me something interesting.
Good Example:
Tell me something interesting that I might not know about the data set below.
Use the following data set as context for your response:
```
{{myBigDataSet}}
```
Don't repeat variables inside of the same area
Do not overload the prompt by referencing the same variable over and over again.
LLMs can become easily confused if data is repeated throughout their instructions.
Use variables in a clear and concise manner.
Bad Example:
When you respond, take into account the name of the Human found in
{{aboutUser}}
I also want to know about Humanʼs food preferences from {{aboutUser}}
In {{aboutUser}} the Human refers to his age, take this into account talk to them
like the age provided.
Good Example:
## Response Formatting
When you respond, take into account Humanʼs Name, food preferences, and age.
Tailor your responses to Human.
## About Human:
{{aboutUser}}
Data Sources allow you to integrate custom data to enhance the capabilities of
your AI. When you add a new Data Source, the file is converted to a vector
database the AI can query before generating a response to your users.
Tap Upload Files to add the files you'd like to include in your Data Source. You
can include multiple files in a single Data Source.
Once youʼve finished uploading your Data Source, you can tap the arrow to
preview your uploaded file and view the extracted text, raw data, and raw vectors
that were created.
You can access and make edits to your data source by going to the Data Sources
folder in the left panel.
You can reference your Data Source by using a Query Data automation block.
Create a new block by tapping the plus icon on the automation canvas and select
“Query Data”.
On the right-hand side, select your desired Data Source from the drop-down
menu, give it a variable name, and set the max results you'd like the AI to query.
In the Query Template, provide instructions on how the AI should query the Data
Source. This query can include variables from other inputs or automations.
Example: I am looking for information about {{topic}}.
NOTE: Uploading a Data Source DOES NOT mean the AI now knows everything
about your file. You must instruct the AI on how to query that file for the AI to use
your Data Source properly.
Send Message Automation
Now that you have saved your query data block as a variable, you can reference
that variable in a Send Message block.
Tap the (+) icon on the automation canvas and select Send Message. In the space
provided type your message.
How to Use Automations in MindStudio
Everything you need to know about building out your AI automation workflow
Automations Overview
You can locate the Automation canvas in the Main.flow file and then tap
Automations.
Onboarding
Onboarding is the initial set of inputs a user will interact with before entering the
main AI workflow.
You can create these inputs in your global settings by selecting the root file then
global settings.
Add your input(s) by tapping the (+) icon in the bottom left of the container. You
can add inputs you've already created or create new ones.
NOTE: You can also create and delete user inputs directly from the User Inputs
folder on the left resources panel.
Automation Blocks
Automation blocks act as triggers for a specific action the AI performs at a certain
section of the AI workflow. You can add an automation block by tapping the (+)
button directly on the automation canvas and selecting the block you want to use.
1. Collect Input: Collects input directly from a user, which can be stored as a variable.
2. Query Data: Instructs the AI to reference a specific part of an uploaded data source. The Data Source can also be stored as a variable and referenced in other parts of the app.
3. Run Function: This block runs more complex functions powered by custom Javascript. Check out Custom Functions to learn more about this block.
4. Send Message: This block allows the AI to send a synthetic user message out of view from the user. This message can reference variables and ask the AI to output a specific response.
5. Menu: This block creates a multiple choice user input that allows you to branch responses to different blocks on your canvas.
6. Jump to Workflow: This block allows you to jump from one workflow to another.
Terminator Blocks
The Terminator blocks are end-state actions at the end of the automation workflow
and determine what the user-to-AI interaction looks like.
1. Chat: This takes the user into a basic chat functionality between themselves and the AI.
2. Revise Document: This makes the AI's initial output editable for the user using a Rich Text Editor. The user can also make AI-powered edits with a magic wand tool.
3. Data Source Explorer: This shows the selected Data Source to the user upon completion of the workflow. A user can simultaneously chat with the AI while exploring the document.
4. End Session: This ends the session after every initial output. A user can start a new session by opening up a new thread.
Once you've completed the build of your AI and you're ready to ship it, you can
configure the built-in publish settings, broken up into three parts:
- General - Provide your app details and build a landing page.

You can access the publish settings by tapping on the root file in the left panel.
General Settings
Details
Upload an app icon and a social sharing card that will be displayed anytime you
share the link to your app.
NOTE: This section also gives you the option of adding an image gallery and a
preview video, both of which will be displayed on your app's landing page.
Landing Page
Create a detailed description and a footer where you can add up to 12 links.
NOTE: The landing page uses a Rich Text Editor, where you can use custom
formatting, link text, add images, and more.
Custom Branding
With custom branding you can choose the amount of white labeling, create
custom color schemes, choose font styles, and more.
Sharing
Remixing: Allow others to make a clone of your AI and build their own.
Password Protect: Restrict your app to users that have a custom password.
Those without the password can still access the landing page.
Embedding
Enable API access to embed your app on third party sites and more.
Check out our video on Embedding apps to learn more.
Pricing
Keep your app free or set a monthly subscription price for users.
Billing Settings
Subscription
Select or change your YouAi monthly subscription plan. All plans are per AI app.
Click here to view plan options.
Invoices
In this guide, we'll be going over the basics of testing and debugging your app in
MindStudio.
Prompt Tester
The Prompt Tester is a chat terminal that allows you to interact with your AI and
test different types of prompts and interactions to make sure your AIʼs thought
process is working properly.
You can access the Prompt Tester in the Main.flow section of MindStudio under
both the Prompt and Model Settings tabs.
NOTE: You cannot use the Prompt Tester for automation flows.
Errors Tab
The Errors tab shows a list of errors that have occurred within the creation of the
app. Each error includes a short description of the issue and the section in which
the error is occurring.
Debugger
The Debugger gives you an in-depth look into interactions with your app, where
you can find and debug any issues.
Draft State
The Draft state allows you to experience the actual flow of your app. This includes
onboarding, automations and the end state interactions.
The Draft also has its own built in debugger that will show you variables and raw
data as you interact with the app.
Creating multiple workflows within a single app allows developers to use
different AI models, prompts, and automation flows to improve the user experience.
In this guide, we'll discuss how to create additional workflows and implement them
into your app.
Creating An Additional Workflow
To create a new workflow, navigate to the root file and tap the (+) icon. Then,
select new workflow from the dropdown menu.
Once you've added your new workflow, configure the prompt for your app, the
automation workflow, and the model settings. You can rename or delete your
workflows at any time by right-clicking and selecting the desired action.
The Jump to Workflow automation block triggers the switch between two
workflows. To use the jump block, tap the (+) icon or right click in the area of the
automation canvas you would like to switch workflows. Then select Jump to
Workflow from the dropdown menu.
Configure
In the configuration panel on the right, select the workflow you want to jump to.
You can create more jump blocks to move back and forth between other
workflows. Where to place your jump block will depend on your use case.
NOTE: Onboarding user inputs exist globally and remain at the beginning
of your app experience no matter the number of workflows you have.
Next, tap the Enable API Access button. This action will activate the
API functionality for the app. Upon enabling, you will receive the
App ID and an API key that can be used to make requests.
Making Requests
The primary requests that can be made using the API key are running a workflow
and loading an existing thread.
Running a workflow
console.log(data.threadId);
Note: Make sure to replace your-app-id and your-api-key with the
APP ID and API Key you obtained earlier.
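A workflow-run request might be sketched as follows. Note that the endpoint URL below is a placeholder assumption, not MindStudio's documented path; only the App ID and API key mechanics come from this guide. Consult the official API reference for the real endpoint and payload shape.

```javascript
// Placeholder endpoint — NOT MindStudio's documented URL. Replace the host,
// path, your-app-id, and your-api-key per the official API reference.
function buildRunWorkflowRequest(appId, apiKey, inputs) {
  return {
    url: `https://api.example.com/v1/apps/${appId}/run`, // placeholder
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs }),
    },
  };
}

// Usage (in an async context):
// const { url, options } = buildRunWorkflowRequest("your-app-id", "your-api-key", {});
// const data = await (await fetch(url, options)).json();
// console.log(data.threadId);
```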
Loading a thread
console.log(data.thread);
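Similarly, loading a thread could be sketched as a GET request keyed by the thread ID. Again, the URL here is a placeholder assumption, not the documented endpoint:

```javascript
// Placeholder endpoint — NOT MindStudio's documented URL; see the official
// API reference. Builds a GET request for an existing thread by its ID.
function buildLoadThreadRequest(threadId, apiKey) {
  return {
    url: `https://api.example.com/v1/threads/${encodeURIComponent(threadId)}`, // placeholder
    options: {
      method: "GET",
      headers: { Authorization: `Bearer ${apiKey}` },
    },
  };
}

// Usage (in an async context):
// const { url, options } = buildLoadThreadRequest("thread-id", "your-api-key");
// const data = await (await fetch(url, options)).json();
// console.log(data.thread);
```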
3. Click Create New AI.

- Use AI to generate a Prompt for your app: Let AI generate the Prompt for your app! Select Generate Prompt and then click Next. See Use AI to Generate the Prompt for Your App.
- Start from a blank AI: See how easy it is to create an AI-powered app from scratch! Select Blank AI, and then click Next.
What is a Prompt?
Understand how a Prompt instructs AI how to perform a task.
Overview
- Contextual clarity: Clearly defining context in your Prompt can help the LLM generate more consistent and relevant responses.
- Structured format: Structuring your Prompt with clear instructions can help the LLM understand and fulfill your requests accurately.
Overview
The easiest way to write Prompts is by using the Prompt Generator Engine built into
MindStudio. The Prompt Generator Engine uses AI to help you create custom
Prompts for your AI app in a matter of minutes.
Use AI to Generate the Prompt for Your App
Follow these steps to use the Prompt Generator Engine to create the Prompt for
your app:
1. Ensure the following:
   1. You have created a MindStudio account.
   2. You have created a new AI app from scratch. If you created your new AI app from a blank AI, then remove any Prompt that displays, and then click Generate Prompt.
2. In What do you want your AI to do?, describe how you want your AI-powered app to work.
   - Write in everyday natural language how you want your app to function.
   - Describe the actors in your app. AI is one actor. The person using your app is another actor.
   - Unordered list: Use an unordered list to describe separate functions your app is to do.
   - Ordered list: Use an ordered list to describe functionality that the app is to do in a specific order.
3. Optionally, from Generation Engine, view which engine will generate your Prompt. By default, MindStudio's engine, named Default Engine (youai-default-001), generates your Prompt.
4. After writing your app's behavior and functionality, click Generate. This button is not enabled until text is entered into the Prompt Generator Engine.
5. Review the generated prompt and make sure all bullet points make sense within the context of your specific use case.
Example
Below is an example to create an app that is written in two different formats: one is
written as a brief paragraph while the other is written using a list.
Text written in paragraph format to create an app that generates blog posts.
Text written using an ordered list to create an app that generates blog posts.
The Prompt Generator Engine generates the following Prompt from this example
that is ready to use. See how easy that is?
Instead of using AI to generate the prompt for your app using the Prompt
Generator Engine, you can write your own Prompt. When creating an app from a
blank AI, a sample Prompt displays for you.
Follow these steps to write or edit your Prompt:
1. Log in to MindStudio, and then edit your AI app. The Automations tab displays by default.
2. Select the Prompt tab.

Want AI to generate your Prompt for you? Remove the current Prompt, and then
click Generate Prompt.
- Context and background that informs the AI without the user providing that context;
- The input the LLM uses to perform the task, such as Variables;
- The role the LLM plays when performing the task, such as a blog generator or training co-pilot;
- Constraints placed upon the AI model that restrict what the LLM may do.
- Structure your Prompts to make them more efficient and organized for the LLM to follow with consistent results. Prompts that contain unorganized instructions can overwhelm and confuse the LLM and produce undesired results.
- Structure Prompts to make them more legible for collaborators on your project or those that remix your project for their own AI-powered apps.
This topic describes how to use Markdown in your Prompts for experiences your
app users will appreciate.
Sectional Formatting to Organize Instructions in Your Prompt
- Lists
- Bolded text
- Headers
Create sections within your Prompt to break up information and make your
instructions more clear and concise. To create a section, use a hashtag symbol (#)
to create a header. Anything you type underneath that header will become part of
that section. See Overview how to structure your Prompt into sections.
Create a list to organize important information. Use the following list types
depending on your use case:
- Unordered list
- Ordered list
- Sub-list
Unordered List
Example:
Unordered List
- First item
- Second item
- Third item
Ordered List
An ordered list presents information for the LLM to recognize in a specific order to
process. Unlike the unordered list, the ordered list uses a numerical system to list
out line items.
Example:
Ordered List
1. First item
2. Second item
3. Third item
Sub-List
Example:
Sub-List
1. First item
   - Sub item 1
   - Sub item 2
2. Second item
3. Third item
Bolded Text
In Markdown, you can use bolded text to emphasize the importance of a section of
your Prompt. Bolded text also makes your Prompt more legible for collaborators on
your project.
Create bolded text by using double asterisks (**) around the text to bold. Example: **Assistant Output**.
Less frequently used Markdown syntax can help provide examples to the LLM of
different types of components or how to generate its output. Providing examples
in your Prompt can help ensure that the initial output of your app is consistent with
your instructions.
For example, if you need the initial output of your AI app to be formatted inside a
table, adding an example of a table provides the LLM with the context it needs to
perform that function.
Following are examples of additional Markdown syntax that you can include in your
Prompt:
Table
| Syntax | Description |
| ----------- | ----------- |
| Header | Title |
| Paragraph | Text |
Task List
- [x] Write the press release
- [ ] Update the website
- [ ] Contact the media
Link
[Text](https://www.example.com)
Image
![alt text](image.jpg)
For more information on Markdown, see the Markdown Cheatsheet.
Tasks: The Tasks section explicitly instructs the tasks that the app
performs.
Best Practice: Describe upon which actor the LLM performs the task.
For example, does the LLM submit a message to Human (user) or to
another explicitly stated system? If the latter, ensure that the stated
system is explicitly described as an actor in the Purpose section.
Output: The Output section specifies the expected results from the
LLM's processing of the Prompt. This section outlines the specific
characteristics, format, and content that the app user expects in the
AI's response.
Best Practice: Explicitly specify information or content that must be
included in the LLM's response. This section guides the AI regarding
what topics, points, or data should be in the output to ensure it is
relevant and comprehensive. This is particularly important for Prompts
aimed at generating informative or educational content.
Best Practice: Specify the presentation format of the output if
necessary. For example, should the output be a list, table, or a specific
document structure?
Best Practice: Specify the file type to output if necessary. For
example, should the output be in plain text, CSV, PDF, HTML, or other
format?
Keep your Prompts simple and straightforward. State one idea at a time.
Include context in your Prompts: additional information that helps the LLM
respond accurately.
Include what the LLM is to output, including what it is to generate and in which
format(s).
Include any traits, parameters, and/or constraints that limit how the LLM
performs its task or what it generates in its output.
Keep your Prompts simple and straightforward. Avoid unnecessary complexity that
may confuse the LLM and lead to inaccurate responses.
A common misconception about writing Prompts is that more information means
better results. There are some use cases that may require more complex Prompts
than others, but complexity does not always mean more text.
Adding repetitive lines, run-on sentences, or the same Variable repeatedly
overwhelms the LLM and causes issues. To prevent these problems, state one idea
at a time and use Markdown syntax to stay organized.
Compare the examples below of a bad Prompt versus a good Prompt.
Improper Practice
Proper Practice
The following partially-shown Prompt is well written and organized. Note how
the LLM is referred to as Assistant and the app user is referred to as Human.
Best Practice
# The Assistant
The Assistant is a personalized gift recommender that assists Human in finding
the right gift for an individual.
The Assistant understands the individual as: {{giftee}}
## Assistant Output
The Assistant **must always** provide Human with:
1. 5-8 gift recommendations
2. a link to purchase each gift
3. a description of its findings
Variables are a powerful way to inject information into your Prompt that the user
entered into your app. The Variable represents that entered information. However,
how you add those Variables into your Prompt matters.
When an app user provides information via a User Input block, that data is stored
in a Variable. Using this Variable is a powerful and flexible way to instruct the
LLM (AI model) how to perform a task. However, how you add a Variable into your
Prompt is important. If a Variable is not entered following best practices, the
injected information the app user entered can make the Prompt wording
grammatically incorrect or unclear, thereby confusing the LLM. Understanding
how to properly write Variables into your Prompt can mean the difference between
a functional and a nonfunctional app.
Compare the examples below of how not to use a Variable in a Prompt versus how
to do so. In these examples, the Variable is named style. Surrounding the Variable
with two pairs of curly braces injects the data the Variable stores into the Prompt,
replacing the text {{style}}. Note how in the improper example, after the
Variable value is injected into the Prompt, the instruction becomes grammatically
incorrect, making it unlikely the AI understands the instruction.
Bad Practice
The Assistant is a Copy Writer that assists Human in generating SEO-focused Blog
Posts in the writing {{style}} of Human.
Write the Variable after a colon (:) so that after the Variable value is injected into
the Prompt, the instruction remains grammatically correct and clear.
Best Practice
The Assistant is a Copy Writer that assists Human in generating SEO-focused Blog
Posts in the writing style of Human: {{style}}
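To illustrate, suppose the style Variable holds the (hypothetical) value "casual and witty". After injection, the improper Prompt becomes ungrammatical, while writing the Variable after a colon keeps the instruction clear:

```
Bad (injected):  ...generating SEO-focused Blog Posts in the writing casual and witty of Human.
Good (injected): ...generating SEO-focused Blog Posts in the writing style of Human: casual and witty
```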
What is a Variable?
When writing your Prompt, address the LLM (AI model) in the third person, not in
the second person. Addressing the AI in the third person instructs the AI what role
to assume in order to perform the instruction(s) that follow. Telling the AI which
role to assume sets the context for which data the LLM has at its disposal.
This improperly written Prompt addresses the LLM in the second person
perspective.
Bad Practice
You are a YouTube Metadata Generator that generates metadata for YouTube
videos.
This properly written Prompt addresses the LLM in the third person perspective.
Note how the AI is referred to as Assistant, which is also a best practice.
Best Practice
The Assistant is a YouTube Metadata Generator that assists Human in generating
metadata for their YouTube videos.
Save Certain Actions for Automations
Not all instructions belong in the Prompt. As a best practice, use the Prompt to
define the AIʼs role and designated task(s). Specific actions belong in your
Automation Workflow.
Improper Practice
This improperly written Prompt instructs which action the LLM (AI model) should
take. Instead, use a Send Message block in the Automation Workflow that outputs
the response of that message directly to the app user.
Bad Practice
Write a LinkedIn post article about: {{topic}}.
Proper Practice
This properly written Prompt defines the AI's role and does not specify which
action to take.
Best Practice
The Assistant is a LinkedIn Post Generator that assists Human with generating a
LinkedIn Post article about: {{topic}}.
Overview
Test your CoT Prompt and iterate to improve it: Test your CoT Prompt with
different configurations and refine it based on the outcomes. Pay attention to
how well the LLM follows the reasoning steps and adjust your Prompt
accordingly. The steps the LLM outlines should be accurate, relevant to the
problem you are solving, and contribute to solving that problem.
# Purpose
# Example
Refer to the following example of how inventory level forecasting was performed
last quarter.
- Product A had a sales volume of 1,000 units. The current inventory level is 200
units. There is an anticipated market trend indicating a 10% increase in demand.
- Expected demand was calculated for the next quarter and adjusted inventory
levels as follows:
1. **Calculate expected demand:** Start with the sales volume of the last quarter
for Product A, which is 1,000 units. Anticipating a 10% increase in demand, the
expected demand for the next quarter would be 1,100 units.
2. **Adjust inventory levels:** Considering the current inventory level of 200 units,
to meet the expected demand of 1,100 units, 900 more units were ordered for
Product A for the next quarter.
# Goal
- The app's goal is to minimize overstock and understock situations to maximize
profitability.
# Output
- Explicitly state the step-by-step reasoning into a structured response.
- Output the response into a table with column headers "Previous Inventory",
"Anticipated Growth", and "Inventory Forecasting", in that order.
- Enter your step-by-step reasoning into the "Inventory Forecasting" column.
Variables are how MindStudio passes raw data through different parts of the app.
In this guide, weʼll be discussing some of the best practices when it comes to
using variables efficiently.
Understand What Your Variables Represent
AIs read variables as raw data. This means that the placement of your variables
is an important factor in your AIʼs ability to accurately understand the
action it is supposed to take.
Below are examples of a right and wrong way to place your variables:
BAD:
Tell me about {{myBigDataSet}} and tell me something interesting.
GOOD:
Tell me something interesting that I might not know about the data set below.
Use the following data set as context for your response:
```
{{myBigDataSet}}
```
Do Not Repeat Your Variables in the Same Section
Do not overload the prompt by referencing the same variable over and over again.
LLMs can become easily confused if data is repeated throughout their instructions.
Use variables in a clear and concise manner.
Below are some examples of a right and wrong way a variable is being used:
BAD:
When you respond, take into account the name of the Human found in
{{aboutUser}}
In {{aboutUser}} the Human refers to his age, take this into account talk to them
like the age provided.
GOOD:
## Response Formatting
When you respond, take into account Humanʼs Name, food preferences, and age.
Tailor your responses to Human.
## About Human:
{{aboutUser}}
Group Inputs When Needed
You can group inputs together under a single variable name that can be
referenced anywhere in your app. This is useful for referencing a collection of
inputs all at once without clouding areas of your app with too much text.
To do this, give each variable youʼd like to group the same name followed by
double brackets ([]). This will place them in numerical order under a new folder
with the same variable name.
When creating variables anywhere in your app, make sure to always double check
the spelling and case sensitivity of your writing.
For example, if you have a variable {{MyVar}} and then reference that variable in
your prompt or automations as {{Myvar}}, the AI will not be able to recognize the
data.
NOTE: Always use the debugger to check any inconsistencies between variables.
To create an input folder, you first need to create a user input. To do this, navigate
to the resources panel on the left and tap the (+) icon next to the User Inputs
folder.
Next, fill out all the details of your user input. In the “Variable” section under
“General” is where you will add your variable name. Type in the variable name
followed by double brackets ([]).
Once finished, you will notice a new folder with your variable name as the title.
Add More Inputs
As you add more inputs, give them the same variable name followed by the double
brackets ([]). MindStudio will automatically store those variables under the same
folder, numbering them in the order in which you made them.
You can now reference those variables individually {{context[#]}} by using their
assigned variable name and number or use the folder name {{context}} as a
variable to reference all those inputs at once.
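As a sketch, assuming a group of inputs all named context (the group name is hypothetical, and the numbering follows the order in which you created the inputs), the references look like this:

```
{{context[1]}}   an individual input from the group, by its assigned number
{{context}}      all grouped inputs at once
```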
NOTE: You can change the variable names at any time if you no longer wish to
include a specific user input within that group.
Logging allows you to save user data from a specific input in your app's workflow.
This can aid in debugging or tracking events across your app. In this guide, we'll
walk through how to enable logging and where you can view the logged data.
NOTE: Logging is only available to users that are subscribed to our premium
plans.
Steps
Create a user input
Before we enable logging, we must first create a user input. To do this, tap the (+)
icon next to the user inputs folder in the resources panel, and fill out the
necessary information.
Enable Logging
Once you've filled out the details, scroll to the Advanced section of the user input
page, and from the Logging dropdown select Enabled.
In the Test Value box you can type in a test value that makes it easy to pre-fill
responses when testing in drafts.
NOTE: Inputs with logging enabled will publicly display the message "Responses
will be logged and visible to the developer".
Testing
To test your input, navigate to the preview button and select Preview Draft.
Select the "Use test value" button to quickly pre-fill the input.
View Logs
To view your logs, navigate to your app's Dashboard and select the Logs tab.
From here you'll be able to see all logged responses from that input. This also
includes the date of the response, the User ID, and the Thread ID.
Conclusion
With logging, you are able to see your users' responses to specific inputs, allowing
for more detailed insights into the user experience and what you can do to
improve the overall functionality of your app.
Auto arrange takes all of your automation blocks and arranges them in a vertical
line. To use auto arrange, tap the auto arrange icon in the bottom left of the
canvas.
NOTE: You can press the spacebar on your keyboard to center your blocks.
Changing the Auto Arrange shape
The auto arrange feature also allows you to change the shape of the flow. To
change the shape, click-drag to highlight your blocks and then select the desired
shape in the bottom left of the canvas.
You can also arrange a section of blocks rather than your entire canvas. To do
this, highlight the section of blocks you'd like to arrange and then use the same
arrange icon to select your shape.
NOTE: You can also select multiple blocks by holding shift and clicking the desired
blocks.
Onboarding prompts are a global set of user inputs that users will interact with
before they enter your app's main workflow. In this guide, we'll take a look at how
to create onboarding prompts and when they are best used in your app.
Create an Onboarding Prompt
Select the entry workflow you'd like to create your onboarding prompts for. If you
only have one workflow, it will be selected by default.
Global Logging
Global Logging allows you to log user transcripts that can be accessed in the
dashboard section of your app. You can enable this feature by selecting Enable
from the dropdown menu.
When creating onboarding prompts, think about the data being collected as data
that is consistent but might need to be edited by your users later on.
Onboarding prompts should collect data that your users won't have to provide
every time they create a new session. This could include items like sales team
preferences, market trends, and writing samples.
It is recommended that onboarding data be grouped under a single variable that
can be referenced in the prompt or in a Send Message automation block later
on.
Branching capabilities unlock even more potential when it comes to
building out your automation workflows. With branching you can create
separate paths for your users depending on their responses. In this guide, we'll
discuss how to implement branching in your application.
Menu Block
The Menu block is a multiple-choice user input block that allows you to connect
responses to other automations in your workflow.
To use this block, tap the (+) icon or right click anywhere on the automation
canvas and from the dropdown menu select Menu.
Configuration
In the configuration panel on the right, add your prompt question and answer
options. Tap the (+) icon at the bottom of the container to add as many answer
options as you'd like.
Assign Responses
To assign your response to a specific block, tap the circle icon next to your
response. When the automation canvas turns blue select the block you want to
send that response to.
NOTE: You will know that the response is assigned when the circle icon is filled
blue.
Conclusion
The menu automation block opens up the capability to create branching workflows
by assigning responses to different blocks on the automation canvas. This allows
your users to dictate the journey they take through your app and create a more
personalized experience.
The Collect Input block lets you collect raw data inputs at a specific moment of
your automation workflow.
Create a Collect Input Block
To create a collect input block, tap on the (+) icon anywhere on the automation
canvas and select Collect Input from the menu.
Next, in the block configuration panel on the right, tap the (+) icon at the bottom
of the user inputs container.
This will open a modal where you can add an existing input or create a new one by
tapping “Create New”.
Once youʼve added one or more inputs you can rearrange the order, set inputs as
optional or required, and edit them by tapping the arrow icon next to the input
name.
The Send Message block is a synthetic message that is sent to the AI on behalf of
the user.
To create a Send Message block, tap the (+) icon on the automation canvas and
then select Send Message from the menu.
Craft A Message
In the configuration panel on the right, you can type in your message in the space
provided. You can format this message to reference variables such as user inputs,
data sources, and more.
Note: You can also use this message to progress a chat without needing the user
to interact with the AI.
Message Settings
Response Behavior
The response behavior allows you to choose whether you want the AI's response
to the message to be displayed to the user or assigned to a variable that can be
referenced in other areas of your AI workflow.
Sender
The Sender dropdown allows you to dictate where your message is sent. If “user”
is selected, then the AIʼs response will be sent to the user without showing the
content of the message. If “System” is selected, the contents of the message will
be sent directly to the user along with a simulated response.
The Query Data block allows you to run specific queries on a data source, and
assign the results to a variable.
Note: You must have created a Data Source in order to use the block.
Create a Query Data block
To create a Query Data block, tap the (+) icon anywhere on the automation canvas
and select Query Data from the menu.
Next, in the block configuration panel on the right, select your desired data source
from the dropdown menu, assign a variable name, and set the max results.
The query template is the search query that will be run against your data source.
You can use a User Input block to collect the user's query and then reference that
variable in the query template.
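For example, a query template might reference a hypothetical userQuestion Variable collected earlier in the workflow:

```
Find information relevant to: {{userQuestion}}
```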
Now that you are running this query in your automation workflow, you can
reference the results of that query via its assigned variable name.
The Data Source Explorer is an end state automation block that allows the user to
view the chosen data source while interacting with the AI.
Create A Data Source Explorer Block
To create a data source explorer block, tap on the green block at the end of your
automation workflow. Then, in the configuration panel on the right, select Data
Source Explorer from the behavior dropdown.
Chat Settings
You have the option to create a system introduction that will appear at the
beginning of the chat session.
Message Processing
NOTE: If using Retrieval Augmented Generation (RAG), you may include variables
inside of your Message Processing Template.
The Revise Document block is an end state automation that allows the initial
output to be directly editable by the user via a Rich Text Editor. The user can also
generate changes to their document with an AI powered tool.
First, tap on the green block at the end of your automation workflow. Then, from
the behavior dropdown menu, select Revise Document.
Document Settings
Revision Template
System Introduction
The System Introduction is an optional message that will be displayed to the user
at the top of the initial output.
Message Processing
NOTE: If using Retrieval Augmented Generation (RAG), you may include variables
inside of your Message Processing Template.
To create a new Logic block, tap the (+) icon anywhere on the
automation canvas and select "Logic" from the dropdown menu.
From here you can configure the two main settings of the block.
Conditions
Bad Example
I want the {{lead}} to be valuable.
Good Example
The {{lead}} is valuable based on the following: {{context}}
2. Use quotation marks around your variables when inserting them directly into the
sentence of your condition.
Example
The "{{lead}}" is not valuable based on the following: {{context}}
Once your conditions are written, you can route them to other blocks
by tapping the condition, and then tapping the block you'd like to
send it to.
Logic Engine
The Logic Engine is essentially a prompt that takes in the values of the
conditionals and evaluates them to return an active result.
Overview
In this guide, you will learn about the ability to override model settings on a per
message basis. This feature will allow you to customize the model settings for
individual messages within your application.
Steps
Add a "send message" block by tapping the (+) icon on the automation canvas.
From here you can adjust the temperature, max response size and whether or not
you'd like to include the prompt.
NOTE: Changing these settings will only affect the model used on this block; they
will not be inherited by the rest of the project.
Optimizations and Use Cases
● Allows for optimizations beyond raw output quality, such as speed and
performance.
● Highlights the use of smaller, faster models for initial automations and deeper
analysis using larger models.
● Useful for selecting the most suitable model for specific sub-tasks.
● Aids in breaking apart the reliance on a single model for an entire application.
Conclusion
Data Sources allow you to query multiple files at a time and save those results
as a variable that can be referenced elsewhere in your app. In this guide, you will
learn how to add multiple files to a single data source.
Navigate to the Data Sources Tab
First, navigate to the data sources folder in the resources panel. Select the data
source you'd like to add a file to by opening the folder.
Upload a new data source
Tap the (+) icon under your current file to add an additional file.
Navigate back to the resources panel. When you open your data source folder, you
should see all of your existing files.
You can view all the elements of your new file by selecting it from the menu.
Just because you have uploaded a Data Source does not mean that the AI
automatically knows everything about that Data Source. Make sure you
are referencing that Data Source in Message Rules or Automations.
2. Using corrupted or bad files
When uploading a Data Source, not only should you make sure that the file
type is correct, but that the text of the file is being properly extracted.
By tapping on Extracted Text from the top bar of the modal, you can see
whether or not the file is being properly uploaded.
3. Query Data block issues
You must use the Query Template to instruct the AI about the specific
query that is being made. If the AI doesn't know what to query, there will
be no output.
4. Overloading the Prompt
Referencing a Variable multiple times can confuse the AI and produce undesired
results.
All functions can be found in the "Functions" tab, located in the navigator on the
left-hand side of the screen.
NOTE: You can choose from the functions you have already created or
community-sourced functions created by other developers.
Creating Custom Functions
When creating your own functions, you will find three sections to configure:
Code Tab
This is where you can write your own custom JavaScript code to execute within
your AI workflow.
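As an illustrative sketch only (the MindStudio-specific input/output wiring is not shown in this section, so this example is plain JavaScript with a hypothetical function name), the core logic of a custom function is ordinary code. This helper normalizes a comma-separated tag list:

```javascript
// Hypothetical example: the kind of logic a custom function might run.
// The MindStudio-specific variable wiring is omitted as an assumption.
function normalizeTags(raw) {
  // Split on commas, trim whitespace, and drop empty entries
  return raw
    .split(",")
    .map((tag) => tag.trim())
    .filter((tag) => tag.length > 0);
}

// normalizeTags("sales, marketing , ,ops")
// returns ["sales", "marketing", "ops"]
```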
Configuration Tab
In this tab, you can define a no-code interface that AI creators can fill out when
using the function in their own AI. Additionally, you can set the block style in this
tab to make function blocks easily recognizable within the automation canvas.
This tab allows you to test your functions and view the console response.
You have the option to submit functions you have created for community use via
GitHub. By right-clicking on the function and selecting "Submit Template," you can
open the GitHub page to submit your pull request.
Expanding AI Capabilities
Custom functions are a foundational feature that significantly expands the
capabilities of AIs in MindStudio.
With custom functions, developers can create powerful and customizable
functionalities that AI creators can integrate into their workflows.
With the Zapier Webhook block, you are able to send input data from your app
directly to Zapier, allowing for additional integrations within your Automation
workflow. You must have a Zapier account in order to use this function block.
Create A Zapier Webhook
To create a Zapier Webhook, tap the (+) icon on the automation canvas. Then
select Run Function from the menu.
Under Community Functions, select Zapier Webhook from the function blocks list.
Configure the Function
In the configuration panel on the right, you will need to provide the function with
two parts:
Webhook URL
This is the URL provided to you by Zapier when you create a Webhook trigger.
Input
The input is the raw data sent from your app to Zapier. You can use this field to
send data via an input using a variable.
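For example (the variable name here is hypothetical), the input field might carry a small JSON payload in which a Variable supplies the value:

```
{ "email": "{{userEmail}}", "source": "my-mindstudio-app" }
```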
NOTE: You must have a Mailchimp account to use this function block.
Create a Mailchimp function block
To create a Mailchimp block, tap the (+) icon on the automation canvas and select
Run Function from the dropdown menu.
Locate the Community Functions button and select Mailchimp from the list.
Configure the Mailchimp Block
API Key
Retrieve the API key from your Mailchimp account by locating your profile page
and then selecting Account and Billing.
Next, under the Extras tab, select API Key.
Scroll until you find the Create A Key button. Tap the button, give your API key a
name, and then generate the key.
Copy the generated key and paste it into the API key section of the configuration.
List ID
To locate the Data Center, check the end of your API key. For example, if you're in
the US, the Data Center might read "us10".
Once you have found the Data Center, paste it into the required field.
NOTE: The Data Center is your API endpoint URL and will depend on where you
are based geographically.
Input
NOTE: Make sure to use double braces when adding in your variable name.
Example: {{MyVar}}
Overview
In this guide, you will learn how to generate images and include them in your AI's
output. To use this function, you must have access to an OpenAI account with
an API key.
Steps
● Navigate to your OpenAI account and access the API Keys section.
● In MindStudio, add a new "Run Function" block.
● Create a new "Send Message" block to generate a prompt for DALL-E.
● Inside the DALL-E block, use the {{image_prompt}} variable to prompt DALL-E
for image generation.
● Save the output variable, which will store the image URL.
● Create another synthetic message using the "Send Message" block, with the
sender being the system.
● Open the function and navigate to the test data tab.
● Use the API key and a sample prompt to run the function and verify image
generation.
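The steps above can be sketched in code. This is a hedged example, not the exact function shipped with MindStudio: it only builds the JSON request body for OpenAI's image generation endpoint (POST https://api.openai.com/v1/images/generations), with the prompt assumed to come from the {{image_prompt}} variable at runtime.

```javascript
// Hedged sketch: builds the request body for OpenAI's image generation
// endpoint. The network call, API key handling, and MindStudio block
// configuration are omitted here.
function buildImageRequestBody(prompt, size = "1024x1024") {
  return JSON.stringify({ prompt: prompt, n: 1, size: size });
}

// buildImageRequestBody("a watercolor fox")
// returns '{"prompt":"a watercolor fox","n":1,"size":"1024x1024"}'
```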
Conclusion
In conclusion, utilizing the DALL-E image generator enhances the visual appeal of
the AI-generated content in MindStudio.
Once you've completed your AI build, you can enable the embed capability by
tapping on the root file in the resource panel and then selecting Embedding
under the Access section.
Authorize Domains
To authorize the domain, paste your website URL into the space provided and
tap save. This will give your website access to display the embed.
You are able to include multiple URLs if you'd like to embed your app on multiple
sites.
Embed Code
The Embed Code block displays the code used to embed your app. Copy the
code by tapping the copy button in the top right of the code block.
Next, you'll want to paste that code into an embed block inside of your website
builder. Make sure you place the embed block in the desired location you'd like the
app to appear on your website.
NOTE: Creating the embed will vary depending on the website builder you use.
Make sure to check the rules offered by your website provider.
Advanced Embed Code
Advanced embed code allows you to include your user's user ID if they are already
logged in to your site. You can also change the target frame ID or enable more
verbose logging.
You can access this code by tapping on the Show Advanced Configurations
button.
Publish
Once you've completed these steps, publish your website to view the embedded
app. For apps embedded on your website, users DO NOT have to log in to gain
access.
NOTE: The embedded app can only be viewed when you have published your
website. You will not be able to view the embed inside of your website builder.
In this guide, you will learn how to fully customize the styles and branding of your
AI applications.
NOTE: To modify custom styles and branding, you will need a Pro or Business
subscription in MindStudio.
Accessing Custom Styles
To access custom styles, select the Root file from the resources panel.
Then, navigate to Style and Branding from the General menu and tap the toggle to
enable custom styles.
Modifying Styles
Once you have enabled custom styles, you can start customizing various elements
of your AI app's appearance.
Background Color
You can change the background color to match your brand's color palette. Copy
the hex codes from your preferred color palette and paste them into the app.
Alternatively, you can use the Color Picker tool to select colors, modify opacity,
and adjust the hue as needed.
Text Color
Customize the text color to ensure optimal readability. Choose a color that
complements your background color.
Modify the color of buttons and other accents to create a cohesive visual
experience.
Fonts
Change the font of your AI app to align with your brand's typography guidelines.
Select a font that represents your brand's personality and enhances readability.
Corners
Customize the corner style of various elements, such as buttons and containers,
to add a unique touch to your AI app's design.
Preview Changes
As you make changes to the styles, you can see a live preview on the right-hand
side of the screen.
The preview will update in real-time to showcase the transformed look and feel of
your AI app.
Once you are satisfied with the custom branding, click on the "Publish" button.
After publishing, refresh the page to see the updated changes applied to your AI
app. All the theming changes made will also be reflected in any embedded AI
applications.
Conclusion
Custom branding allows you to completely transform the look and feel of your AI
apps. By changing colors, fonts, and styles, you can create a unique and branded
experience. This feature is easy to use and helps your AI applications stand out or
blend seamlessly into your website's branding.
In this guide, we'll walk through the steps of enrolling in the YouAi Developer
Program and setting up Stripe to start accepting payments for your app.
NOTE: You must be subscribed to the Pro or Business tier in order to charge for
use of your app.
Locate Payments
First locate your payments by going to Account Settings then Developer Settings.
Next, connect your Stripe account to enroll in the YouAi Developer Program.
Select "Connect Stripe account" and follow the steps.
Make sure your status has changed to "Enrolled". You can update your payment
settings at any time by opening your Stripe Dashboard.
Raw data provides the context for the AI's functionality and how it interacts with
the user. In this guide, we'll discuss the different types of data that an AI might
collect and where you will want to capture that data when building your
MindStudio app.
Types of Data
Global Data
Global data refers to data that hardly ever changes and is built into the
knowledge base of your AI.
This data is captured as the context you provide to your AI in the form of a prompt.
You can use the prompt to define the role of your AI and provide step by step
instructions on how it should perform.
Global data can also include uploaded data sources that don't need to be
continuously referenced throughout your automation workflow. For example,
providing the AI with a data source that includes company information like a
mission statement, and brand documentation would be considered global data.
Examples of Global Data: Company info, brand documents, training materials,
and guides.
Onboarding Data
Onboarding data refers to data that is consistent but might need to be edited by
your users later on. This data should be captured in the onboarding flow of your
app, in the form of user inputs.
It is recommended that onboarding data be grouped under a single variable that
can be referenced in the prompt or in a Send Message automation block.
Examples of Onboarding Data: Sales team preferences, market trends, and
writing samples.
Runtime Data
Runtime data refers to data that changes every session and will need to be filled
out each time a new session starts.
This data is captured in the automation flow of your app via user inputs and
data queries. Runtime data is typically the last piece of raw data passed through
the AI before a chat session.
Examples of Runtime Data: Customer info, Point of Contact info, and specific
data queries.
Conclusion
Knowing where and at which point to provide context and capture user data is an
integral part of the app creation process. It ensures that the AI provides your users
with the correct outputs and functions efficiently. Remember to always test your
AIs and experiment with different flows to find what works best for your use case.
This guide contains guidance and best practices that can help you design a great
AI Details Page, graphics, and icons for any AI built using MindStudio.
NOTE: This guide highlights best practices for the YouAi marketplace. If you are
embedding your AI app elsewhere, some of these suggestions may not apply.
Branding your AI App
Your AI apps should express their brand in ways that make them feel instantly
recognizable while giving your users a consistent experience.
Quick Guidelines
Writing
The words you choose to describe your AI app are vital to your user experience.
When creating an AI app you have direct control over the AI App Name, Tagline,
and Description.
AI App Name
The AI App Name plays a critical role in how users discover it. Be sure to choose a
simple, memorable name that is easy to spell and speaks to what your app does.
Avoid names that use generic terms or are too similar to an existing app.
●
Maximum length: 32 characters
Tagline
Description
Your description appears on your AI App Details page. The ideal description
highlights the key features and functionality of your app. The description can be
as long as you'd like, and it's encouraged to write a few paragraphs in order to
properly fill out the page.
Your description should be written in the tone of your brand and should use
terminology that your audience will understand.
Your description is written using a text editor that allows you to:
●
Select headings (H1, H2, H3, H4, Paragraph)
●
Add images
Use images and video to demonstrate the features and functionality of your AI
app. Images and video are for all audiences, so they must be appropriate for
people of all ages.
Share Card (Open Graph)
Every element of your AI App Store listing has the power to drive downloads of your
app. Use these written and visual elements to help customers discover your app and
engage them through thoughtfully crafted content on your AI Details page.