All posts

- Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
  https://arxiv.org/abs/2305.10973
  Summary: Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects. Existing approaches gain controllability of generative adversarial networks (GANs) ...

- You Only Cache Once: Decoder-Decoder Architectures for Language Models
  https://arxiv.org/abs/2405.05254
  Summary: We introduce a decoder-decoder architecture, YOCO, for large language models, which only caches key-value pairs once. It consists of two components, i.e., a cross-decoder stacked upon a self-decoder. The self-decoder efficiently encodes global key-value ...

- Paints Undo
  https://huggingface.co/spaces/MohamedRashad/PaintsUndo
  https://github.com/lllyasviel/Paints-UNDO
  GitHub: lllyasviel/Paints-UNDO, "Understand Human Behavior to Align True Needs."

- BitNet: Scaling 1-bit Transformers for Large Language Models
  https://arxiv.org/abs/2310.11453
  Abstract: The increasing size of large language models has posed challenges for deployment and raised concerns about environmental impact due to high energy consumption. In this work, we introduce BitNet, a scalable and stable 1-bit Transformer architecture designed ...
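The BitNet entry above describes a 1-bit Transformer. As a rough illustration of the core idea, here is a minimal NumPy sketch of sign-based weight binarization with a per-tensor scaling factor; the function names are illustrative, not the paper's actual API, and the real method includes further details (e.g. quantization-aware training) not shown here.

```python
import numpy as np

def binarize_weights(w):
    """Binarize a weight matrix to {-1, +1} and compute a per-tensor
    scale (mean absolute value) that preserves overall magnitude."""
    alpha = np.mean(np.abs(w))
    w_bin = np.sign(w)
    w_bin[w_bin == 0] = 1.0  # map exact zeros to +1
    return w_bin, alpha

def bitlinear(x, w):
    """1-bit linear layer sketch: y = (x @ sign(W)^T) * alpha."""
    w_bin, alpha = binarize_weights(w)
    return (x @ w_bin.T) * alpha

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))   # full-precision weights
x = rng.normal(size=(2, 8))   # a batch of activations
y = bitlinear(x, w)
print(y.shape)  # (2, 4)
```

Because every weight is ±1 (times one shared scale), the matrix multiply reduces to additions and subtractions, which is where the memory and energy savings come from.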
- Q-Sparse: All Large Language Models can be Fully Sparsely-Activated
  https://arxiv.org/abs/2407.10969
  Overview: We introduce Q-Sparse, a simple yet effective approach to training sparsely-activated large language models (LLMs). Q-Sparse enables full sparsity of activations in LLMs, which can bring significant efficiency gains in inference. This is achieved by applying ...

- JEST: Data curation via joint example selection further accelerates multimodal learning
  https://arxiv.org/abs/2406.17711
  Abstract: Data curation is an essential component of large-scale pretraining. In this work, we demonstrate that jointly selecting batches of data is more effective for learning than selecting examples independently. Multimodal contrastive objectives expose the dependencies ...

- Runway3 music video (shared)
  https://www.youtube.com/watch?v=ImpUC9WWetw

- CharacterGen: Efficient 3D Character Generation from Single Images with Multi-View Pose Calibration
  https://charactergen.github.io/
  In this paper, we present CharacterGen, a framework developed to efficiently generate 3D characters. CharacterGen introduces a streamlined generation pipeline along with an image-conditioned multi-view diffusion model. This model effectively calibrates ...