Brief-details: TinyLLaVA is a 1.41B-parameter multimodal model that efficiently handles image-text tasks, achieving competitive performance against larger 7B models with significantly fewer parameters.
Brief-details: UltraRM-13b is a SOTA reward model built on LLaMA2-13B, achieving a 92.30% win rate vs. text-davinci-003 on the AlpacaEval benchmark.
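A win rate like the one above is simply the fraction of prompts on which the candidate model's response beats the reference model's. A minimal sketch, assuming per-prompt scalar scores are already available (the tie-counts-as-half convention and the toy scores are illustrative, not AlpacaEval's actual judging pipeline):

```python
def win_rate(candidate_scores, reference_scores):
    """Fraction of prompts where the candidate's score beats the reference's.

    Ties count as half a win, a common convention in pairwise evaluation.
    """
    wins = sum(
        1.0 if c > r else 0.5 if c == r else 0.0
        for c, r in zip(candidate_scores, reference_scores)
    )
    return wins / len(candidate_scores)


# Toy example with hypothetical per-prompt judge scores:
print(win_rate([1.2, 0.8, 2.0, -0.1], [0.5, 0.9, 1.0, -0.5]))  # → 0.75
```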
Brief-details: A 34B-parameter Yi-based model fine-tuned on light-novel and roleplay data, optimized for creative writing and character interactions, and distributed in GGUF format.
Brief-details: TinyLlama-1.1B-Chat, a compact 1.1B-parameter chat model based on the Llama 2 architecture and trained on 3T tokens. Apache 2.0 licensed and optimized for efficient deployment; supports text generation tasks with minimal computational requirements.
Brief-details: An anime-styled variant of SSD-1B, merged with NekorayXL and fine-tuned through distillation. Supports text-to-image generation with specialized anime aesthetics.
Brief-details: A 1B parameter code generation model fine-tuned on the evol-codealpaca dataset, achieving 39% pass@1 on HumanEval and 31.74% on MBPP.
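The pass@1 figures above are instances of the unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021): draw n samples per task, count the c that pass, and estimate pass@k as 1 − C(n−c, k)/C(n, k). A minimal sketch:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k).

    n: total samples drawn per task, c: samples that passed the tests.
    """
    if n - c < k:
        # Too few failures to fill k slots: at least one sample passes.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# With k = 1 this reduces to the plain pass fraction c / n:
print(pass_at_k(10, 4, 1))  # → 0.4
```

The benchmark-level score is then the mean of this estimate over all tasks.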
Brief-details: A 13B-parameter GPTQ-quantized LLaMA2-based model merging Pygmalion-2 and MythoMax, optimized for roleplay and chat with multiple quantization options.
Brief-details: Japanese vision-language model for image captioning and VQA tasks. Built on InstructBLIP architecture with Japanese StableLM, trained on CC12M and COCO datasets.
Brief-details: WizardMath-7B-V1.0 is a specialized mathematical reasoning LLM achieving 54.9% on GSM8k and 10.7% on MATH benchmarks, built on Llama 2 architecture.
Brief-details: 13B-parameter LLaMA2-based model optimized for 8K context, GGML-quantized for CPU/GPU inference, trained on the Orca chat dataset for instruction following.
Brief-details: Text-to-image diffusion model with built-in VAE, optimized for realistic image generation with a specific focus on quality control and anatomical accuracy.
Brief-details: Multi-stage blend model combining 15+ Stable Diffusion models, optimized for high-quality anime-style image generation with advanced weight calibration.
Brief-details: Korean-language model with 13.1B parameters, fine-tuned on the KoAlpaca Dataset v1.1b, optimized for text generation and multilingual tasks.
Brief-details: BART-based conversation summarization model fine-tuned on SAMSum dataset, achieving 54.87 ROUGE-1 score. Popular for dialogue summarization tasks.
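ROUGE-1, the metric quoted above, measures unigram overlap between a generated summary and a reference. A simplified sketch (whitespace tokenization only; official ROUGE adds stemming and other normalization, so scores will differ slightly):

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall.

    Tokenization is a plain whitespace split, for illustration only.
    """
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


# 3 of 3 candidate tokens match, 3 of 4 reference tokens matched:
print(rouge1_f1("amanda baked cookies", "amanda baked cookies today"))
```

Reported benchmark scores (e.g. 54.87 here) are this value averaged over the test set and scaled to 0-100.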
Brief-details: A specialized anime-style text-to-image diffusion model focused on producing high-quality pastel artwork, featuring unique stylization and detailed character generation capabilities.
Brief-details: FLAN-T5 XXL, sharded in FP16: a powerful text-to-text transformer supporting 50+ languages, optimized for NVIDIA A10G deployment with quantization for efficient inference.
Brief-details: RoBERTa-based ChatGPT-detection model trained on the HC3 dataset; classifies text to identify AI-generated content.
Brief-details: ImageReward, the first general-purpose text-to-image human preference reward model, trained on 137k expert comparisons. Outperforms CLIP, Aesthetic, and BLIP.
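Preference reward models of this kind are commonly trained with a Bradley-Terry pairwise objective: the probability that one image is preferred over another is a sigmoid of the difference between their scalar rewards. A minimal sketch of that relationship (illustrative, not ImageReward's actual code):

```python
import math


def preference_prob(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry probability that item A is preferred over item B,
    given scalar reward-model scores: sigmoid(r_a - r_b)."""
    return 1.0 / (1.0 + math.exp(reward_b - reward_a))


# Equal rewards give a 50/50 preference; higher reward, higher preference:
print(preference_prob(0.0, 0.0))  # → 0.5
print(preference_prob(2.0, 0.0))
```

Training then minimizes the negative log-likelihood of this probability over the annotated comparison pairs.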
Brief-details: A 2.7B-parameter dialogue model fine-tuned from GPT-Neo, designed for conversational AI with advanced text generation capabilities. Features customizable character personas and dialogue formatting.
Brief-details: A specialized text-to-image diffusion model trained on colorized historical photos (1880s-1980s), creating vintage-style images with rich tones via the "timeless style" token.
Brief-details: FFXIV-Style is a Stable Diffusion model trained on Final Fantasy XIV trailer imagery, specializing in generating game-style character portraits, landscapes, and ornate armor designs.