
All Posts

pd 저장용 (protected post)
Generative Image Dynamics
https://generative-dynamics.github.io/
Summary: An approach to modeling an image-space prior on scene motion. The prior is learned from motion trajectories extracted from real video sequences depicting natural, oscillatory dynamics such as trees, flowers, candles, and clothes.
Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild
https://arxiv.org/abs/2401.13627
Summary: Introduces SUPIR (Scaling-UP Image Restoration), an image restoration method that harnesses a generative prior and the power of model scaling. Leveraging multi-modal techniques and an advanced generative prior, SUPIR marks a significant advance in photo-realistic restoration.
LoRA: Low-Rank Adaptation of Large Language Models
https://arxiv.org/abs/2106.09685
Summary: An important paradigm in NLP is large-scale pre-training on general-domain data followed by adaptation to particular tasks or domains. As pre-trained models grow larger, full fine-tuning, which retrains all model parameters, becomes less feasible. LoRA instead freezes the pre-trained weights and injects trainable low-rank decomposition matrices into each layer, drastically reducing the number of trainable parameters per task.
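The core idea of LoRA can be sketched in a few lines. This is a minimal illustration, not the paper's code: the names (`lora_forward`, `A`, `B`) and the shapes are my own, and a real implementation would apply this inside attention projection layers of a transformer.

```python
import numpy as np

# Hedged LoRA sketch: instead of updating a frozen weight W (d_out x d_in),
# train a low-rank delta B @ A with rank r << min(d_in, d_out), so only
# r * (d_in + d_out) parameters are trainable instead of d_in * d_out.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + B A x; with B = 0 at init, the adapted model
    # starts out exactly equal to the base model.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)  # delta is zero at initialization

full = W.size            # 64 * 64 = 4096 params for full fine-tuning
lora = A.size + B.size   # 4 * 64 + 64 * 4 = 512 trainable params
print(f"full fine-tune params: {full}, LoRA params: {lora}")
```

The zero-initialized `B` is the design choice that makes training stable: the low-rank branch contributes nothing at the start and is learned gradually.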
Luma AI — Dream Machine
https://lumalabs.ai/dream-machine/creations
Dream Machine is an AI model from Luma AI that quickly generates high-quality, realistic videos from text and images.
Parameter-Efficient Transfer Learning for NLP
https://arxiv.org/abs/1902.00751
Summary: Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter-inefficient: an entire new model is required for every task. As an alternative, the authors propose transfer with small adapter modules inserted into the pre-trained network, so only a few task-specific parameters are trained per task.
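For contrast with LoRA's additive low-rank delta, an adapter in the style of this paper is a small bottleneck MLP with a residual skip. A minimal sketch, assuming illustrative names and sizes (a real adapter sits after each transformer sub-layer, with layer norm and per-layer weights):

```python
import numpy as np

# Hedged adapter sketch: down-project to a small bottleneck, apply a
# nonlinearity, up-project back, and add a residual connection. Only the
# adapter weights are trained per task; the backbone stays frozen.
rng = np.random.default_rng(0)
d_model, bottleneck = 64, 8

W_down = rng.normal(size=(bottleneck, d_model)) * 0.01  # trainable
W_up = np.zeros((d_model, bottleneck))  # zero-init: adapter starts as identity

def adapter(h):
    z = np.maximum(0.0, W_down @ h)  # down-project + ReLU
    return h + W_up @ z              # up-project + residual skip

h = rng.normal(size=d_model)
assert np.allclose(adapter(h), h)  # near-identity at initialization
```

The residual skip plus zero-initialized up-projection means inserting adapters does not perturb the pre-trained network before training begins, mirroring LoRA's zero-initialized `B` matrix.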
LLM Critics Help Catch LLM Bugs
https://openai.com/index/finding-gpt4s-mistakes-with-gpt-4/
Note: a technical report; honestly, not really recommended.
Google Colab pricing:
- T4 = 17.15
- L4 = 4.82
- A100 = 11.77