Brief Details: Quantized versions of the Llama-4-Scout-17B-16E-Instruct model at various compression levels (Q8_0 down to IQ1_M), optimized for different hardware configurations and RAM constraints.
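A minimal sketch of fetching a single quant file sized to your RAM budget; the repo id and filename below are placeholders, and the right quant level is a trade-off (Q8_0 at the large/accurate end, IQ1_M at the smallest/most lossy end).

```python
# Fetch one GGUF quant file from the Hub with huggingface_hub.
# Repo id and filename are placeholders for the actual quantized repo.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="your-org/Llama-4-Scout-17B-16E-Instruct-GGUF",   # placeholder repo id
    filename="Llama-4-Scout-17B-16E-Instruct-Q4_K_M.gguf",    # placeholder quant file
)
print("Downloaded to:", gguf_path)
```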
Brief-details: HiDream-I1-Full-nf4 is a 4-bit quantized version of the 17B parameter image generation model, optimized to run on 16GB VRAM while maintaining state-of-the-art quality.
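For orientation, a generic diffusers-style loading sketch; the actual model card may require a custom pipeline class or extra packages, so treat the repo id and loading call as assumptions to verify there.

```python
# Generic text-to-image loading pattern; CPU offload helps stay within a
# 16GB VRAM budget. Repo id below is a placeholder.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "your-org/HiDream-I1-Full-nf4",   # placeholder repo id
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()        # moves idle submodules off the GPU

image = pipe(prompt="a watercolor fox in a snowy forest").images[0]
image.save("fox.png")
```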
Brief-details: Pusa-V0.5 is an efficient video diffusion model supporting text/image-to-video generation with frame-level noise control, trained in only about 100 (0.1k) GPU hours on H800 hardware.
Brief-details: 8B parameter hybrid reasoning LLM based on the Llama architecture. Features an extended thinking mode, support for 30+ languages, and a 128k context length. Optimized for STEM and coding.
Brief Details: Drawatoon-v1 by fumeisama - An upcoming AI model (planned for April 2025) focused on drawing/artistic capabilities, currently in development.
Brief-details: A powerful multilingual multimodal reranker model capable of processing both text and images across 29+ languages, built on Qwen2-VL-2B with 2.4B parameters and 10K token context.
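A hedged sketch of scoring query-document pairs: Jina's rerankers typically expose a compute_score helper via trust_remote_code, but the repo id and helper name here are assumptions to check against the model card.

```python
# Score (query, document) pairs; higher score = more relevant.
from transformers import AutoModel

model = AutoModel.from_pretrained("jinaai/jina-reranker-m0", trust_remote_code=True)  # assumed repo id

pairs = [
    ["what is a panda?", "The giant panda is a bear species endemic to China."],
    ["what is a panda?", "Paris is the capital of France."],
]
scores = model.compute_score(pairs)  # assumed helper exposed by the remote code
print(scores)
```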
BRIEF DETAILS: A 14B parameter coding-specialized LLM available in multiple GGUF quantizations (2-8 bit), optimized for code generation and technical tasks with imatrix quantization.
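A minimal sketch of running one of the GGUF quants locally with llama-cpp-python; the .gguf filename is a placeholder for whichever 2-8 bit quant fits your hardware.

```python
# Load a local GGUF quant and request a code completion.
from llama_cpp import Llama

llm = Llama(model_path="coder-14b-Q5_K_M.gguf", n_ctx=4096)  # placeholder file name
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}],
    temperature=0.2,   # low temperature tends to work better for code generation
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```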
BRIEF DETAILS: A powerful 32B parameter reasoning model optimized for math and coding tasks, achieving performance comparable to DeepSeek-R1 (671B) with an AIME24 score of 79.7 and a LiveCodeBench score of 63.9.
Brief-details: A 1.5B parameter AI model specialized in reasoning, math, and coding tasks, outperforming larger models while achieving 37.91% accuracy on GPQA-Diamond.
Brief Details: HiDream-I1-Fast is a 17B parameter open-source image generation model achieving SOTA quality, featuring superior prompt following and commercial-friendly licensing.
BRIEF-DETAILS: DeepCoder-1.5B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B, achieving 25.1% on LiveCodeBench (v5) and 73% on HumanEval+, using GRPO+ training.
Brief-details: Cogito v1-preview-llama-70B is a 70B parameter hybrid reasoning LLM with 128k context, support for 30+ languages, and enhanced STEM/coding capabilities through IDA training.
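A minimal sketch of the hybrid-reasoning usage pattern, assuming the extended-thinking mode is switched on via a system prompt as the Cogito model card describes; the repo id and exact prompt string should be verified there.

```python
# Standard transformers chat flow; the system prompt below is the assumed
# toggle for the model's extended-thinking mode.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepcogito/cogito-v1-preview-llama-70B"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "Enable deep thinking subroutine."},  # assumed thinking-mode toggle
    {"role": "user", "content": "Prove that the sum of two even numbers is even."},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0], skip_special_tokens=True))
```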
BRIEF DETAILS: Llama-4-Scout-17B-16E is Meta's multimodal MoE model with 17B active parameters, supporting text and image processing across 12 languages with a 10M token context window.
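A sketch of a single image+text turn, assuming a transformers version with Llama 4 support; the class name, repo id (the Instruct variant is used here), and chat-template call should be checked against the model card's official example.

```python
# Image + text prompt through the processor's chat template.
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image URL
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```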
Brief-details: Advanced 78B parameter multimodal LLM with superior reasoning capabilities, native multimodal pre-training, and extensive vision-language understanding across images, videos, and GUI tasks.
Brief-details: Cogito v1 preview (14B params) - Hybrid reasoning LLM with self-reflection capabilities, 128k context, and support for 30+ languages, optimized for STEM/coding.
Brief Details: HiDream-I1-Dev is a 17B-parameter open-source image generation model achieving SOTA results, featuring superior quality and prompt following with MIT license compatibility.
BRIEF DETAILS: A 3B parameter hybrid reasoning LLM based on Llama architecture, featuring 128k context, 30+ language support, and innovative self-reflection capabilities.
Brief-details: Cogito v1 preview (32B params) - Hybrid reasoning LLM with self-reflection capabilities and 128k context; supports 30+ languages and tool calling.
BRIEF DETAILS: Efficient 16B parameter MoE vision-language model with only 2.8B active parameters, featuring a 128K context window and strong performance in multimodal tasks, OCR, and agent capabilities.
BRIEF DETAILS: A powerful 253B parameter LLM derived from Llama-3.1, optimized through Neural Architecture Search for enhanced reasoning and efficiency. Features a 128K context length and is ready for commercial use.
BRIEF DETAILS: Efficient 16B parameter MoE vision-language model with only 2.8B active parameters, featuring a 128K context window and specializing in mathematical reasoning and long-chain thinking.