
Dogukan Uraz T.
RE/RS, Self-Improving AI, Autonomous Agents, Scaling Laws
I'm on a quest to explore recursively self-improving AI, autonomous AI agents for science, program synthesis, and neural scaling laws. I see finding new ways to scale multi-step reasoning and improvement as a stepping stone to super-reasoning. Although many technical topics interest me, the ones that interest me most are scaling autonomous agents for science, meta-learning, meta-optimization, multi-step reasoning, program synthesis, and compute-optimal scaling. Experimenting with new ideas and building prototypes is my favorite way to learn, and open source is my favorite way to share.
Recent Posts & Works
April 2, 2025
Building a Scalable Document Processing System with Gemini 2.0 and Elasticsearch
January 22, 2025
Revisiting Superintelligence Control Problem with Newer Extensions
January 2, 2025
New Year, New Blog
Archive, 2024
Tensor Decompositions
Published on December 31, 2024
Mixture-of-Experts Process Groups Initialization
Published on December 31, 2024
Real-Time Text and Voice-Based Retrieval Augmented Generation Pipeline with Gemini 2.0 for Long Documents
Published on December 25, 2024
Enhancing Performance with C/C++ Code Execution for Langchain Agents
Published on July 18, 2024
OpenAI's Code Execution Runtime & Replicating Sandboxing Infrastructure
Published on July 2, 2024
Utilizing Airflow for Planning, Scheduling, Executing and Scaling AI Agentic Workloads
Published on May 6, 2024
Linen Modules for Simple Long Convolutions Sequence Modeling with Log-Linear Time Complexity
Published on March 20, 2024
Tri-RMSNorm: Yielding Speed-up for Training Stabilization
Published on March 14, 2024
[Let's Review] Gradient Ascent + Jax Implementation
Published on March 11, 2024
[Let's Review] Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
Published on March 7, 2024