Showing 40 open source projects for "deepseek"

  • 1
    DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    DeepSeek-R1 is an open-source large language model developed by DeepSeek, designed to excel in complex reasoning tasks across domains such as mathematics, coding, and language. DeepSeek R1 offers unrestricted access for both commercial and academic use. The model employs a Mixture of Experts (MoE) architecture, comprising 671 billion total parameters with 37 billion active parameters per token, and supports a context length of up to 128,000 tokens.
    Downloads: 72 This Week
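
    A minimal sketch of calling DeepSeek-R1 through an OpenAI-compatible chat endpoint. The base URL and the deepseek-reasoner model id below are assumptions drawn from DeepSeek's hosted API; a self-hosted deployment of the open weights works the same way as long as the server speaks the same protocol.

    ```python
    # Hedged sketch: assumes an OpenAI-compatible endpoint serving DeepSeek-R1.
    # The base URL and model id are assumptions; adjust them to your deployment.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["DEEPSEEK_API_KEY"],   # any placeholder works for most local servers
        base_url="https://api.deepseek.com",      # or e.g. http://localhost:8000/v1 for a local server
    )

    resp = client.chat.completions.create(
        model="deepseek-reasoner",                # assumed model id for R1
        messages=[{"role": "user", "content": "Prove that the sum of two odd integers is even."}],
    )
    print(resp.choices[0].message.content)
    ```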
  • 2
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    DeepSeek-V3 is a robust Mixture-of-Experts (MoE) language model developed by DeepSeek, featuring a total of 671 billion parameters, with 37 billion activated per token. It employs Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture to enhance computational efficiency. The model introduces an auxiliary-loss-free load balancing strategy and a multi-token prediction training objective to boost performance.
    Downloads: 62 This Week
  • 3
    DeepSeek-OCR

    Contexts Optical Compression

    DeepSeek-OCR is an open-source optical character recognition solution built as part of the broader DeepSeek AI vision-language ecosystem. It is designed to extract text from images, PDFs, and scanned documents, and integrates with multimodal capabilities that understand layout, context, and visual elements beyond raw character recognition.
    Downloads: 11 This Week
  • 4
    DeepSeek VL2

    Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding

    DeepSeek-VL2 is DeepSeek’s vision + language multimodal model—essentially the next-gen successor to their first vision-language models. It combines image and text inputs into a unified embedding / reasoning space so that you can query with text and image jointly (e.g. “What’s going on in this scene?” or “Generate a caption appropriate to context”).
    Downloads: 9 This Week
  • 5
    DeepSeek Math

    Pushing the Limits of Mathematical Reasoning in Open Language Models

    ...The repo may also include modules that integrate external computational tools (e.g. a computer algebra system or a calculator backend) to enhance correctness. Because mathematical reasoning is a high bar for LLMs, DeepSeek-Math aims to showcase the model's ability not just in natural text but in precise formal reasoning.
    Downloads: 5 This Week
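
    To make the tool-integration idea concrete, here is a toy check of a model-proposed closed form with SymPy. It illustrates the general CAS-backed verification pattern only; it is not DeepSeek-Math's actual code, and the claimed identity is a stock textbook fact.

    ```python
    # Toy CAS check: verify a model-proposed closed form symbolically with SymPy.
    # This sketches the verification pattern, not DeepSeek-Math's pipeline.
    import sympy as sp

    k, n = sp.symbols("k n", positive=True, integer=True)

    # Suppose the model claims: sum_{k=1}^{n} k^2 = n(n+1)(2n+1)/6
    proposed = n * (n + 1) * (2 * n + 1) / 6
    exact = sp.summation(k**2, (k, 1, n))

    assert sp.simplify(exact - proposed) == 0
    print("closed form verified symbolically")
    ```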
  • 6
    DeepSeek V2

    Strong, Economical, and Efficient Mixture-of-Experts Language Model

    DeepSeek-V2 is the second major iteration of DeepSeek's foundation language model (LLM) series: a Mixture-of-Experts model with 236 billion total parameters, of which 21 billion are activated per token, supporting a context length of 128K tokens. It adopts Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture for economical training and efficient inference. The repository includes model weights, evaluation results across a broad suite (e.g. reasoning, math, multilingual benchmarks), configuration files, and inference examples.
    Downloads: 3 This Week
  • 7
    DeepSeek MoE

    Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

    DeepSeek-MoE (“DeepSeek MoE”) is the DeepSeek open implementation of a Mixture-of-Experts (MoE) model architecture meant to increase parameter efficiency by activating only a subset of “expert” submodules per input. The repository introduces fine-grained expert segmentation and shared expert isolation to improve specialization while controlling compute cost.
    Downloads: 2 This Week
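
    The two ideas named above can be sketched in a toy PyTorch layer: a pool of small routed experts chosen per token, plus shared experts that every token passes through. This is an illustrative sketch with made-up sizes, not DeepSeek's implementation.

    ```python
    # Toy MoE layer: fine-grained routed experts (top-k per token) + always-on
    # shared experts. Illustration only; all sizes are invented.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyMoELayer(nn.Module):
        def __init__(self, d_model=64, d_ff=128, n_routed=8, n_shared=1, top_k=2):
            super().__init__()
            self.top_k = top_k

            def ffn():
                return nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

            self.routed = nn.ModuleList([ffn() for _ in range(n_routed)])  # many small experts
            self.shared = nn.ModuleList([ffn() for _ in range(n_shared)])  # isolated, always active
            self.router = nn.Linear(d_model, n_routed, bias=False)

        def forward(self, x):                                # x: (tokens, d_model)
            out = sum(e(x) for e in self.shared)             # shared experts see every token
            weights = F.softmax(self.router(x), dim=-1)      # routing probabilities
            top_w, top_i = weights.topk(self.top_k, dim=-1)  # each token picks its top-k experts
            for slot in range(self.top_k):
                for idx, expert in enumerate(self.routed):
                    mask = top_i[:, slot] == idx
                    if mask.any():                           # only run experts that received tokens
                        out[mask] = out[mask] + top_w[mask, slot, None] * expert(x[mask])
            return out

    print(ToyMoELayer()(torch.randn(16, 64)).shape)          # torch.Size([16, 64])
    ```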
  • 8
    DeepSeek LLM

    DeepSeek LLM: Let there be answers

    The DeepSeek-LLM repository hosts the code, model files, evaluations, and documentation for DeepSeek’s LLM series (notably the 67B Chat variant). Its tagline is “Let there be answers.” The repo includes an “evaluation” folder (with results like math benchmark scores) and code artifacts (e.g. pre-commit config) that support model development and deployment.
    Downloads: 1 This Week
  • 9
    DeepSeek Coder

    DeepSeek Coder: Let the Code Write Itself

    DeepSeek-Coder is a series of code-specialized language models designed to generate, complete, and infill code (and mixed code + natural language) with high fluency in both English and Chinese. The models are trained from scratch on a massive corpus (~2 trillion tokens), of which about 87% is code and 13% is natural language. This dataset covers project-level code structure (not just line-by-line snippets), using a large context window (e.g. 16K) and a secondary fill-in-the-blank objective to encourage better contextual completions and infilling. ...
    Downloads: 1 This Week
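
    A hedged sketch of plain left-to-right completion with one of the published checkpoints via Hugging Face transformers. The checkpoint id is recalled from the model hub and should be verified; the fill-in-the-blank prompt format with its special sentinel tokens is documented in the model card and not reproduced here.

    ```python
    # Hedged sketch: code completion with a DeepSeek-Coder base checkpoint.
    # The checkpoint id is an assumption; verify it on the model hub first.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "deepseek-ai/deepseek-coder-1.3b-base"            # assumed checkpoint id
    tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16, trust_remote_code=True)

    prompt = "# Python function that checks whether a string is a palindrome\ndef is_palindrome(s: str) -> bool:\n"
    out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))
    ```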
  • 10
    DeepSeek VL

    Towards Real-World Vision-Language Understanding

    DeepSeek-VL is DeepSeek’s initial vision-language model that anchors their multimodal stack. It enables understanding and generation across visual and textual modalities—meaning it can process an image + a prompt, answer questions about images, caption, classify, or reason about visuals in context. The model is likely used internally as the visual encoder backbone for agent use cases, to ground perception in downstream tasks (e.g. answering questions about a screenshot).
    Downloads: 0 This Week
  • 11
    DeepSeek-V3.2-Exp

    An experimental version of DeepSeek model

    DeepSeek-V3.2-Exp is an experimental release of the DeepSeek model family, intended as a stepping stone toward the next generation architecture. The key innovation in this version is DeepSeek Sparse Attention (DSA), a sparse attention mechanism that aims to optimize training and inference efficiency in long-context settings without degrading output quality.
    Downloads: 41 This Week
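
    To illustrate the general "attend to a small subset of keys" idea (not DSA itself), here is a toy single-head variant in which each query keeps only its top-k scores. Note that this toy still computes the full score matrix; a real sparse mechanism such as DSA avoids that cost.

    ```python
    # Toy top-k sparse attention, single head. Illustration of the general idea
    # only; it is NOT DeepSeek Sparse Attention, and it still scores every pair.
    import torch
    import torch.nn.functional as F

    def topk_sparse_attention(q, k, v, keep=4):
        scores = q @ k.transpose(-1, -2) / k.shape[-1] ** 0.5   # (T, T) full scores
        top = scores.topk(keep, dim=-1)
        masked = torch.full_like(scores, float("-inf"))
        masked.scatter_(-1, top.indices, top.values)            # keep only top-k per query
        return F.softmax(masked, dim=-1) @ v                    # -inf entries get zero weight

    T, d = 16, 32
    q, k, v = (torch.randn(T, d) for _ in range(3))
    print(topk_sparse_attention(q, k, v).shape)                 # torch.Size([16, 32])
    ```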
  • 12
    DeepSeek Coder V2

    DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

    DeepSeek-Coder-V2 is the second iteration of DeepSeek's code generation models: an open-source Mixture-of-Experts code model further pre-trained from DeepSeek-V2 on additional code and math data. Compared with the V1 line, it expands programming-language support from 86 to 338 languages, extends the context window from 16K to 128K tokens, and improves code generation and mathematical reasoning benchmarks to the point of rivaling closed-source models such as GPT-4 Turbo on coding tasks.
    Downloads: 28 This Week
  • 13
    DeepSeek Prover V2

    Advancing Formal Mathematical Reasoning via Reinforcement Learning

    DeepSeek-Prover-V2 is DeepSeek’s specialized model for formal theorem proving, particularly targeting proof in Lean 4. The repository describes how they use recursive proof decomposition by prompting DeepSeek-V3 to break complex theorems into subgoals, synthesize proof sketches, and then combine them to bootstrap training data. They then fine-tune via reinforcement learning with binary correct/incorrect feedback to integrate informal reasoning with formal proof behavior. ...
    Downloads: 1 This Week
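
    For context, this is the kind of Lean 4 goal such a prover is asked to close and the Lean kernel then checks. The statements below are stock library facts, not items from DeepSeek-Prover-V2's data.

    ```lean
    -- Toy Lean 4 goals of the sort a formal prover must discharge;
    -- both are standard facts, shown only to illustrate the target format.
    theorem add_comm_nat (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    example (xs : List Nat) : (xs ++ []).length = xs.length := by
      simp
    ```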
  • 14
    DeepSeek AIO

    Access and use all DeepSeek AI models in one program.

    DeepSeek AIO is a simple program that allows you to interact with all DeepSeek large language models in one place. It supports text-based chats, data analysis, code generation, language translation, and more. The program is designed to make it easy for users to use DeepSeek's AI tools for different purposes without switching between multiple platforms.
    Downloads: 24 This Week
  • 15
    bolt.diy

    Prompt, run, edit, & deploy full-stack web applications using any LLM

    bolt.diy is an open-source platform that allows you to easily create, run, edit, and deploy full-stack web applications using a variety of large language models (LLMs). It supports popular models like OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, and Groq, and provides the flexibility to integrate additional models through the Vercel AI SDK. Whether you’re experimenting with pre-built models or developing custom AI-driven applications, bolt.diy offers a smooth and intuitive experience for building AI-powered web apps. Its open-source nature invites community contributions, and it serves as an ideal platform for developers looking to leverage the latest AI technologies.
    Downloads: 19 This Week
  • 16
    Unsloth

    Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster

    Unsloth is a framework designed to significantly improve the performance of Llama 3.3, DeepSeek-R1, and other reasoning large language models (LLMs). It optimizes these models to run up to 2x faster while using 70% less memory. Unsloth aims to make finetuning large models more efficient, offering users a simple, resource-efficient solution for customizing LLMs with their datasets. It provides a user-friendly experience through free notebooks and the ability to export finetuned models to various formats.
    Downloads: 3 This Week
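
    A hedged sketch of Unsloth's LoRA finetuning entry point. The checkpoint id, sequence length, and LoRA hyperparameters are assumptions taken from typical Unsloth notebooks, not a verified recipe; check the project's own examples before relying on them.

    ```python
    # Hedged sketch: load a 4-bit base model with Unsloth and attach LoRA adapters.
    # Checkpoint id and hyperparameters below are assumptions, not a tested recipe.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",   # assumed repackaged checkpoint id
        max_seq_length=2048,
        load_in_4bit=True,                                    # quantized base weights to cut memory
    )

    # Only the low-rank adapter matrices are trained.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    # From here the model drops into a standard SFT training loop with your dataset.
    ```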
  • 17
    DualPipe

    A bidirectional pipeline parallelism algorithm

    DualPipe is a bidirectional pipeline parallelism algorithm open-sourced by DeepSeek, introduced in their DeepSeek-V3 technical framework. The main goal of DualPipe is to maximize overlap between computation and communication phases during distributed training, thus reducing idle GPU time (i.e. “pipeline bubbles”) and improving cluster efficiency. Traditional pipeline parallelism methods (e.g. 1F1B or staggered pipelining) leave gaps because forward and backward phases can’t fully overlap with communication. ...
    Downloads: 0 This Week
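
    A back-of-the-envelope way to see the problem DualPipe attacks: for a classic synchronous schedule (GPipe/1F1B-style), the idle "bubble" fraction is roughly (p - 1) / (m + p - 1) for p pipeline stages and m microbatches, so it grows with depth and shrinks only slowly with more microbatches. The toy below computes only that baseline; it does not model DualPipe's bidirectional overlap.

    ```python
    # Baseline pipeline-bubble fraction for a synchronous schedule:
    # bubble ~= (p - 1) / (m + p - 1), with p = stages, m = microbatches.
    def bubble_fraction(stages: int, microbatches: int) -> float:
        return (stages - 1) / (microbatches + stages - 1)

    for m in (8, 32, 128):
        print(f"p=16 stages, m={m:>3} microbatches -> bubble ~ {bubble_fraction(16, m):.1%}")
    ```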
  • 18
    Aider

    Aider is AI pair programming in your terminal

    ...It maps your entire codebase to provide deep context, making it effective for both small scripts and large, complex projects. Aider supports leading cloud models like Claude 3.7 Sonnet, DeepSeek R1, OpenAI’s o1/o3/GPT-4o, as well as local LLMs for privacy-conscious workflows. Its Git integration ensures every AI-driven change is committed with clear messages, giving developers full transparency and control. Beyond text prompts, Aider accepts images, web pages, and even voice commands to guide code changes. With growing adoption, community support, and active development, Aider is widely regarded as one of the most capable free AI coding assistants available today.
    Downloads: 3 This Week
  • 19
    Profile Data

    Analyze computation-communication overlap in V3/R1

    profile-data is a repository that publishes profiling traces and metrics from DeepSeek’s training and inference infrastructure (especially during DeepSeek-V3 / R1 experiments). The profiling data targets insights into computation-communication overlap, pipeline scheduling (e.g. DualPipe), and how MoE / EP / parallelism strategies interact in real systems. The repository contains JSON trace files like train.json, prefill.json, decode.json, and associated assets. Users can load them into tools like Chrome tracing to inspect GPU idle times, overlapping operations, and scheduling alignment. ...
    Downloads: 0 This Week
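
    A small sketch of inspecting one of these traces offline, assuming the files follow the Chrome trace event format that chrome://tracing consumes (the "traceEvents", "ph", "name", and "dur" fields below come from that format and are not verified against the repo).

    ```python
    # Hedged sketch: total busy time per op name from a Chrome-format trace file.
    # Assumes "complete" events (ph == "X") carrying durations in microseconds.
    import json
    from collections import Counter

    with open("train.json") as f:
        data = json.load(f)
    events = data["traceEvents"] if isinstance(data, dict) else data

    busy = Counter()
    for ev in events:
        if ev.get("ph") == "X":
            busy[ev.get("name", "?")] += ev.get("dur", 0)

    for name, us in busy.most_common(10):        # top 10 ops by accumulated time
        print(f"{us / 1e6:10.3f} s  {name}")
    ```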
  • 20
    3FS

    A high-performance distributed file system

    ...By handling caching and batching at a system level, 3FS helps reduce overhead when many features or modules must be evaluated per input (e.g. in an LLM agent pipeline). The repository includes example integration with models like DeepSeek-V2 / V3, showing how 3FS can be plugged into pipelines for operations like plugin processing.
    Downloads: 0 This Week
  • 21
    GPT4Free

    The official gpt4free repository

    gpt4free is an open-source project offering free, unrestricted access to GPT‑4–style language models without requiring an API key. The repository includes scripts and server implementations designed to replicate OpenAI’s GPT‑4 API behavior by leveraging publicly available or self-hosted models. It’s licensed under GPL‑v3.
    Downloads: 9 This Week
  • 22
    GLM-4.6

    Agentic, Reasoning, and Coding (ARC) foundation models

    ...GLM-4.6 also enhances writing quality, producing outputs that better align with human preferences and role-playing scenarios. Benchmark evaluations demonstrate that it not only outperforms GLM-4.5 but also rivals leading global models such as DeepSeek-V3.1-Terminus and Claude Sonnet 4.
    Downloads: 134 This Week
  • 23
    SGLang

    SGLang is a fast serving framework for large language models

    SGLang is a fast serving framework for large language models and vision language models. It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language.
    Downloads: 7 This Week
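
    A hedged sketch of talking to a locally launched SGLang server through its OpenAI-compatible endpoint. The launch command, port, and model id are assumptions based on SGLang's usual setup and should be adapted to your environment.

    ```python
    # Hedged sketch. Assumed server launch (run separately, adjust model/port):
    #   python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V2-Lite-Chat --port 30000
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:30000/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-V2-Lite-Chat",   # must match the model the server loaded
        messages=[{"role": "user", "content": "Summarize what SGLang does in one sentence."}],
    )
    print(resp.choices[0].message.content)
    ```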
  • 24
    Anx Reader

    Featuring powerful AI capabilities and supporting e-book formats

    ...It supports major formats (EPUB, MOBI, AZW3, FB2, TXT) and integrates powerful AI tools for summarizing and intelligent navigation via OpenAI, Claude, Gemini, and DeepSeek. Anx also syncs progress, notes, and highlights over WebDAV, and offers rich analytics—including heatmaps and exportable reading summaries. UI customization and text-to-speech enhance the reading experience.
    Downloads: 9 This Week
  • 25
    DevoxxGenie

    DevoxxGenie is a plugin for IntelliJ IDEA that uses local LLMs

    Devoxx Genie is a fully Java-based LLM Code Assistant plugin for IntelliJ IDEA, designed to integrate with local LLM providers such as Ollama, LMStudio, GPT4All, Llama.cpp, and Exo, as well as cloud-based LLMs such as OpenAI, Anthropic, Mistral, Groq, Gemini, DeepInfra, DeepSeek, OpenRouter, and Azure OpenAI.
    Downloads: 0 This Week
  • Page 1 of 2