福模

Free Open-Source AI Model Downloads - Local AI Tool Resource Platform

ALIGN Multimodal AI Model - Large-Scale Image-Text Alignment

The ALIGN multimodal AI model is trained with contrastive learning on large-scale image-text pairs. It achieves excellent results across multiple vision-language tasks and supports image retrieval and text generation.

Tags: ALIGN, Multimodal, Image-Text, Contrastive Learning
5.6 GB · 2024-12-15
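
To make the dual-encoder contrastive objective concrete, here is a minimal PyTorch sketch of the symmetric image-text InfoNCE loss that ALIGN-style training relies on. The encoders are omitted, and the function name, embedding size, and temperature are illustrative assumptions rather than details of the released checkpoint.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matching image-text pairs."""
    # L2-normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix: [batch, batch].
    logits = image_emb @ text_emb.t() / temperature
    # The i-th image matches the i-th text; everything else is a negative.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings standing in for encoder outputs.
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(contrastive_loss(img, txt).item())
```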

BigGAN Image Generation AI Model - Large-Scale Class-Conditional Generation

The BigGAN image generation model is a generative adversarial network trained at large scale with class-conditional generation. It produces high-fidelity, diverse images and set a new benchmark for GAN research.

Tags: BigGAN, Image Generation, Conditional Generation, Adversarial Networks
15.6 GB · 2025-01-18
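
As a rough illustration of class-conditional generation, the sketch below shows a toy generator that conditions on a class label through a learned embedding concatenated with the noise vector. It only demonstrates the conditioning interface; the real BigGAN uses conditional batch norm and a much deeper residual architecture, and every name here is hypothetical.

```python
import torch
import torch.nn as nn

class ToyConditionalGenerator(nn.Module):
    """Minimal class-conditional generator: concatenates a noise vector
    with a learned class embedding and maps it to a small RGB image."""
    def __init__(self, z_dim=128, num_classes=1000, img_size=32):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 3 * img_size * img_size),
            nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z, labels):
        cond = torch.cat([z, self.class_emb(labels)], dim=-1)
        img = self.net(cond)
        return img.view(-1, 3, self.img_size, self.img_size)

# Sample 4 images conditioned on class index 207 from random noise.
gen = ToyConditionalGenerator()
z = torch.randn(4, 128)
labels = torch.full((4,), 207, dtype=torch.long)
print(gen(z, labels).shape)  # torch.Size([4, 3, 32, 32])
```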

T5 Text-to-Text Transformation Model - Unified Framework for NLP Tasks

The T5 text-to-text model casts every NLP task as a text-to-text transformation within a single framework. It supports translation, summarization, classification, and many other tasks, giving it a high degree of task generality.

Tags: T5, Text-to-Text, NLP, Task Unification
9.8 GB · 2025-01-20
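
A common way to drive the text-to-text interface is through the Hugging Face transformers library. The snippet below assumes that library and the public t5-small checkpoint are available; the archive offered here may package weights in a different format.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Assumed public checkpoint name.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is phrased as text in, text out; the prefix selects the task.
prompt = "translate English to German: The house is wonderful."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output_ids = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# Other tasks use the same interface, e.g.:
#   "summarize: <article text>"
#   "cola sentence: <sentence>"   (grammatical acceptability)
```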

MAE Masked Autoencoders - Efficient Visual Representation Learning Model

MAE (Masked Autoencoders) is an efficient visual representation learning model. It uses a masking strategy for asymmetric denoising autoencoding, which greatly improves training efficiency, and it is suited to a wide range of visual recognition tasks.

Tags: MAE, Masked Autoencoders, Visual Representation, Visual Recognition
23.4 GB · 2025-01-22
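
The source of MAE's efficiency is that the encoder only sees the small subset of patches that survive random masking. The sketch below reproduces that masking step in plain PyTorch under a 75% mask ratio; the function and variable names are illustrative, not taken from the released code.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Randomly drop a fraction of patches, MAE-style.

    patches: [batch, num_patches, dim]. Returns the kept patches,
    a binary mask (1 = masked) in original order, and ids_restore
    for putting decoder tokens back in place."""
    B, N, D = patches.shape
    len_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N)                # random score per patch
    ids_shuffle = noise.argsort(dim=1)      # lowest scores are kept
    ids_restore = ids_shuffle.argsort(dim=1)

    ids_keep = ids_shuffle[:, :len_keep]
    kept = torch.gather(patches, 1, ids_keep.unsqueeze(-1).repeat(1, 1, D))

    mask = torch.ones(B, N)
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)  # back to original patch order
    return kept, mask, ids_restore

x = torch.randn(2, 196, 768)                   # 14x14 grid of ViT patch embeddings
kept, mask, _ = random_masking(x)
print(kept.shape)                              # torch.Size([2, 49, 768])
print(mask.sum(dim=1))                         # 147 masked patches per image
```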

HuBERT Speech Representation Learning Model - Unsupervised Speech Representation Learning

HuBERT is an unsupervised speech representation learning model proposed by Facebook. Through cluster-derived target prediction and masked reconstruction, it learns speech representations hierarchically.

Tags: HuBERT, Speech Representation, Unsupervised Learning, Speech Recognition
1.9 GB · 2025-01-24
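
For extracting frame-level speech features, one common route is the Hugging Face transformers port of HuBERT. The snippet below assumes that port and the facebook/hubert-base-ls960 checkpoint, and feeds a silent dummy waveform purely to show the tensor shapes.

```python
import torch
from transformers import HubertModel

# Assumed checkpoint name; the local download may be packaged differently.
model = HubertModel.from_pretrained("facebook/hubert-base-ls960")
model.eval()

# One second of silence at 16 kHz stands in for a real, normalized waveform
# (in practice a Wav2Vec2FeatureExtractor handles loading and normalization).
input_values = torch.zeros(1, 16000)

with torch.no_grad():
    outputs = model(input_values)

# Frame-level speech representations: [batch, frames, hidden_size].
print(outputs.last_hidden_state.shape)  # roughly 50 frames per second
```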

LayoutLM Document Understanding Model - Document Analysis with Text and Layout

LayoutLM is a document understanding model that combines textual and layout information. By fusing visual and textual features, it improves the accuracy of table parsing and document classification.

Tags: LayoutLM, Document Understanding, Document Analysis, OCR
2.7 GB · 2025-01-26
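
LayoutLM differs from a plain text encoder in that every token carries a bounding box normalized to a 0-1000 page coordinate grid. The sketch below, assuming the Hugging Face transformers port and the microsoft/layoutlm-base-uncased checkpoint, shows how word-level boxes are expanded to token-level boxes and passed alongside the input IDs; the words and coordinates are made up.

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

# Hypothetical OCR output: words with boxes already scaled to 0-1000.
words = ["Invoice", "Total:", "$42.00"]
word_boxes = [[60, 50, 200, 80], [60, 500, 160, 530], [180, 500, 300, 530]]

# Expand word-level boxes to token-level boxes (sub-word tokens share a box).
token_boxes = []
for word, box in zip(words, word_boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
# Add boxes for the [CLS] and [SEP] special tokens.
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
bbox = torch.tensor([token_boxes])

with torch.no_grad():
    outputs = model(input_ids=encoding["input_ids"],
                    bbox=bbox,
                    attention_mask=encoding["attention_mask"],
                    token_type_ids=encoding["token_type_ids"])

# Layout-aware token representations: [batch, tokens, hidden_size].
print(outputs.last_hidden_state.shape)
```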

SimCLR Self-Supervised Visual Learning Model - Contrastive Representation Learning

SimCLR is a self-supervised visual learning model that learns visual representations through contrastive learning. By pairing strong data augmentation with a contrastive objective, it substantially improves unsupervised learning performance.

Tags: SimCLR, Self-Supervised Learning, Contrastive Learning, Visual Representation
4.2 GB · 2025-01-28
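
SimCLR's objective is the NT-Xent loss computed over two augmented views of each image in a batch. Below is a minimal PyTorch sketch of that loss on toy embeddings; the augmentation pipeline and encoder are omitted and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over paired augmented views z1[i] <-> z2[i]."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # [2N, D]
    sim = z @ z.t() / temperature                         # [2N, 2N]
    n = z1.size(0)
    # Exclude self-similarity so it never counts as a candidate.
    sim.fill_diagonal_(float("-inf"))
    # The positive for index i is i+n (and vice versa); all others are negatives.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings of two augmented views of the same 8 images.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```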

DeBERTa Language Understanding Model - Enhanced BERT Model

DeBERTa is an enhanced and improved version of BERT. Through disentangled attention and an enhanced mask decoder, it further improves performance on language understanding tasks.

Tags: DeBERTa, Language Understanding, NLP, BERT Improvement
3.1 GB · 2025-01-30
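
To obtain contextual representations from a DeBERTa encoder, the snippet below assumes the Hugging Face transformers library and the microsoft/deberta-base checkpoint; the disentangled-attention mechanism itself is internal to the model and not re-implemented here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed public checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModel.from_pretrained("microsoft/deberta-base")

inputs = tokenizer("DeBERTa improves BERT with disentangled attention.",
                   return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual token representations: [batch, tokens, hidden_size].
print(outputs.last_hidden_state.shape)
```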