
Releases: Mintplex-Labs/anything-llm

AnythingLLM v1.11.1

02 Mar 17:26
e145c39


Homepage Redesign

The main AnythingLLM homepage has been completely redesigned to be more modern and user-friendly, so you can start chatting the second you open the app after onboarding.

homepage

Native Tool Calling

Native tool calling offers the best performance and experience for tool calling with your LLM provider and model. If you can enable it, you should.

Note: this only applies to local LLM providers. It has no impact on cloud LLMs like OpenAI, Anthropic, or Azure.

We have completely overhauled how @agent tool calling works. AnythingLLM will now leverage the native tool calling abilities of your LLM provider and model.

What this means for you:

  • You can now run complex, multi-step tool calls with your LLM provider and model.
  • Your model will continue working until the final response is generated or the task is determined to be complete.
  • You will get dramatically better responses, even from small tool-calling models

We have also implemented safeguards against infinite loops: a hard cap of 10 tool calls per response prevents runaway tasks.
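The loop described above can be sketched roughly as follows. This is a minimal illustration, not AnythingLLM's actual implementation; the provider client (`callModel`) and tool registry (`tools`) are hypothetical stand-ins:

```javascript
// Minimal sketch of a native tool-calling loop with a hard cap,
// mirroring the "maximum of 10 tool calls per response" safeguard.
// `callModel` and `tools` are hypothetical, not real AnythingLLM APIs.
const MAX_TOOL_CALLS = 10;

async function runAgent(callModel, tools, messages) {
  for (let calls = 0; calls < MAX_TOOL_CALLS; calls++) {
    const reply = await callModel(messages);
    if (!reply.toolCall) return reply.content; // final answer reached
    // Execute the requested tool and feed the result back to the model.
    const result = await tools[reply.toolCall.name](reply.toolCall.args);
    messages.push({ role: "tool", name: reply.toolCall.name, content: result });
  }
  return "Stopped: tool-call limit reached."; // safeguard against runaway tasks
}
```

The cap turns a potentially infinite model-tool ping-pong into a bounded loop while still allowing complex, multi-step tool chains.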

native-tool-calling

Limitations

Most providers do not allow us to probe whether a given model supports native tool calling.

The following local LLM providers will automatically support native tool calling if your model supports it:

  • Built-in LLM provider (AnythingLLM Default)
  • Ollama
  • LM Studio

For the following providers, you will need to set an environment variable to enable native tool calling:

  • Generic OpenAI
  • Groq
  • AWS Bedrock
  • Lemonade
  • LiteLLM
  • Local AI
  • OpenRouter

This is done via the PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING environment variable, set to a comma-separated list of provider keys:

PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING="bedrock,generic-openai,groq,lemonade,litellm,local-ai,openrouter"
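For example, to enable native tool calling for only Groq and OpenRouter, you could add the variable to your server's `.env` file. This is a sketch: the file path and shell workflow are assumptions, and any subset of the provider keys listed above is valid:

```shell
# Hypothetical example: enable native tool calling for two providers
# via the server's .env file (the path is illustrative).
ENV_FILE="./.env"
echo 'PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING="groq,openrouter"' >> "$ENV_FILE"
grep PROVIDER_SUPPORTS_NATIVE_TOOL_CALLING "$ENV_FILE"
```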

Lemonade by AMD Integration

lemonade

Lemonade by AMD is an open-source local model runtime that optimizes performance and efficiency for local models (LLM, ASR, TTS, Image Generation, etc.) for all types of hardware including AMD GPUs and NPUs.

We have added first class support so you can use your local models running via Lemonade within AnythingLLM for the best application experience on top of your local hardware.


What's Changed

New Contributors

Full Changelog: v1.11.0...v1.11.1

AnythingLLM v1.11.0

18 Feb 17:08
40853e4


AnythingLLM Desktop overlay is live!

this is a free & desktop specific feature!

Now, AnythingLLM Desktop has an OS-level, application-aware panel that opens with a single keystroke. Seamlessly ingest your currently open applications alongside all the other chat functionality you already use, like document chat, RAG, agents, and more.

This panel is a much smoother and more convenient way to use AnythingLLM, and we highly recommend it for daily use!

anythingllm-assistant-desktop-promo.1.mp4

What's Changed

New Contributors

Full Changelog: v1.10.0...v1.11.0

AnythingLLM v1.10.0

22 Jan 15:52



Highlighted Changes

AnythingLLM Desktop Assistant is live!

Now, AnythingLLM Desktop is a drop-in replacement for paid tools like Granola, Otter, Fireflies, and more.

  • Runs entirely on your device, can record meetings without joining or summarize arbitrary files
  • Powered by NVIDIA Parakeet + AnythingLLM's on-device orchestration
  • Can call any agent tool, MCP, or anything else you already use with AnythingLLM!
  • Custom summary templates, chat with the transcript, and even speaker identification.
  • "Joined Meeting" Desktop notification to start a new recording with a click, for any meeting software (Zoom, Slack, Discord, Teams, etc.)
  • No rate limits, usage caps, or restrictions
preview.mp4

AnythingLLM Mobile is live on Google Play

The Android AnythingLLM Mobile App is live on Google Play now. This syncs with both Cloud/Self-hosted and Desktop versions of AnythingLLM.

AnythingLLM.Mobile.on.Snapdragon.Promo.1.mp4

Notable other changes

  • Removed the onboarding "Create workspace" page; onboarding now goes straight to home with a new workspace created in the user's native language
  • Refactored Workspace file picker to be more performant
  • Migrated Azure OpenAI to the unified v1 API with full agent support
  • Fixed Pagination bug in paperless-ngx
  • Fixed issue where the undocumented YouTube API changed and broke the YT scraper
  • Implemented Cohere as an agent provider
  • A batch of dependency bumps
  • Fixed bug where XLSX files dragged and dropped into chat weren't "visible" to the model
  • MCP fixes for paths on non-Windows machines
  • Docker image bumps and patches for a healthy Scout score (B)
  • Added Error Boundary to UI to prevent white-page crashes

What's Changed

New Contributors

Full Changelog: v1.9.1...v1.10.0

AnythingLLM v1.9.1

09 Dec 20:56
b96988a


Notable Changes


What's Changed

New Contributors

Full Changelog: v1.9.0...v1.9.1

AnythingLLM v1.9.0

09 Oct 22:20


@agent Overhaul & streaming ⚡️️

agent-streaming.mp4

When AnythingLLM first launched, the word "agent" was not in the vocabulary of the LLM world. Agents are quickly becoming the standard for building AI applications and the core experience for interacting with LLMs.

For too long, due to the complexity of building agents, spotty tool call support, models that can't even use tools, and more nerd stuff, we often had to settle for an experience that was not really fun to use, since 99% of the time you were just looking at loading spinners waiting for the response.

The new agent experience is now here:

  • Streams tool calls and responses in real time (all providers, all models)
  • Agents can now download and ingest files from the web in real time (e.g. a link to a PDF, Excel, or CSV file). Anything you would use as a document can be read by the agent from the web in real time.

Upcoming:

  • Agent real-time API calling without agent flows
  • Agent image understanding
  • Agent system prompt passthrough + user context awareness
  • Realtime file searching cross-platform default skill

Notable Improvements: 🚀

  • All models and providers now support agentic streaming
  • Microsoft Foundry Local integration
  • Ephemerally scrape/download any web resource via agent or uploader

What's Changed

New Contributors

Full Changelog: v1.8.5...v1.9.0

AnythingLLM v1.8.5 🎉 Mobile support + RAG improvements

15 Aug 00:42


AnythingLLM v1.8.5 is live

Notable Changes

Mobile support

Now, under Experimental features, you can connect the AnythingLLM Mobile App (Android Beta) to your instance to seamlessly blend an on-device and off-device experience. Leverage your instance's Agent Skills and flows all within a single unified interface!

Chat with documents has been overhauled

upload-documents.mp4

When we first built AnythingLLM, the average context window was 4K tokens, hardly enough to fit a full document. So we decided to always be RAG-first. This has its drawbacks, since RAG depends on questions being semantically related to content in the document, which leads to poor results for prompts like "Summarize this document," where the retrieved snippets effectively reply, "what are you talking about?"

Well, now we have the best of both worlds. Documents are scoped to a workspace thread & user, and we will attempt to use the full document text whenever your model's context window can support it. If the document overflows that window, we will then ask you to embed it so you can unlock that long-term memory.
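The decision described above amounts to a token-budget check. Here is a rough sketch; the 4-characters-per-token estimate, the reserved budget, and the function itself are illustrative assumptions, not AnythingLLM's actual heuristics:

```javascript
// Rough sketch: use the full document text when it fits the model's
// context window, otherwise fall back to asking the user to embed it.
// The token estimate and reserve values below are illustrative only.
function attachmentStrategy(docText, contextWindowTokens, reservedTokens = 1024) {
  const estimatedTokens = Math.ceil(docText.length / 4); // crude token estimate
  const budget = contextWindowTokens - reservedTokens;   // leave room for the chat itself
  return estimatedTokens <= budget
    ? { mode: "full-text" }               // inject the entire document into context
    : { mode: "embed", estimatedTokens }; // prompt the user to embed for RAG
}
```

Reserving some of the window for the conversation keeps a large attachment from crowding out the chat history and the model's reply.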

context-warning

You can also view and manage your context window to remove files that are no longer relevant while keeping the conversation history.
manage-attached-docs

You can still embed files directly in the workspace file manager too :)

What's Changed

New Contributors

Full Changelog: v1.8.4...v1.8.5

AnythingLLM v1.8.4

16 Jul 18:06


This is a minor patch update.

Notable Changes

  • Workspace & Thread searching now on the sidebar
  • SQL Preflight connection validation (finally)
  • Sticky codeblock headers while scrolling
  • Codeblock max width to prevent long string overflows in the UI

What's Changed

New Contributors

Full Changelog: v1.8.3...v1.8.4

AnythingLLM v1.8.3

09 Jul 19:23
8001d9d


AnythingLLM 1.8.3 is live!

News

Don't forget to sign up for the AnythingLLM mobile beta!

  • Run small LLMs on device, with full on-device RAG, agent tooling, and more. Fully private and on-device
  • Device-to-device sync between mobile and desktop clients over local networks

Notable Desktop Changes

  • Easily publish Agent Flows, Slash Commands, and System Prompts to the AnythingLLM Community Hub
  • Patched YouTube transcript collector since it was failing due to Google API changes

What's Changed

New Contributors

Full Changelog: v1.8.2...v1.8.3

AnythingLLM v1.8.2

10 Jun 23:59
e779dcf


AnythingLLM v1.8.2 is LIVE!

Other news

We were featured on stage at Microsoft Build (how cool???)


Notable Changes

Model swap in chat

You can now easily swap the model or provider in the middle of a chat. Press Cmd/Ctrl + Shift + L to show a tooltip menu and swap models without changing screens.


System Prompt version tracking

Now, when you edit the system prompt of any workspace, we store this information locally so you can easily refer back to or restore previous prompts that might work better for a given model or workspace.


What's Changed

New Contributors

Full Changelog: v1.8.1...v1.8.2

AnythingLLM v1.8.1

06 May 17:50
051ed15


What's Changed

New Contributors

Desktop App Changelog

https://docs.anythingllm.com/changelog/v1.8.1

Full Changelog: v1.8.0...v1.8.1