- New York (UTC -04:00)
- https://robmsmt.github.io/
- @robmsmt
- in/robmsmt
Stars
Tools for merging pretrained large language models.
A graph node engine and editor written in JavaScript, similar to PD or UDK Blueprints; it comes with its own editor in HTML5 Canvas2D. The engine can run client side or server side using Node. It allows…
Implementation of E2-TTS, "Embarrassingly Easy Fully Non-Autoregressive Zero-Shot TTS", in PyTorch
Fast and memory-efficient exact attention
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
AudioLDM: Generate speech, sound effects, music and beyond, with text.
Noise suppression using deep filtering
QLoRA: Efficient Finetuning of Quantized LLMs
Official implementation of Half-Quadratic Quantization (HQQ)
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (2024)
This is a port of Mistral-7B model in JAX
Port of Mistral 7B in Keras and JAX
Agent-driven automation starting with the web. Discord: https://discord.gg/wgNfmFuqJF
Official repository of Evolutionary Optimization of Model Merging Recipes
Code at the speed of thought – Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
Latency and Memory Analysis of Transformer Models for Training and Inference
Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory.
A high-throughput and memory-efficient inference and serving engine for LLMs
A Chrome extension that corrects typos & misspellings on-demand
Full-stack, modern web application generator using FastAPI, PostgreSQL as the database, Nuxt3, Docker, automatic HTTPS, and more.
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable…
A timeline of the latest AI models for audio generation, starting in 2023!
A playbook for systematically maximizing the performance of deep learning models.