Artificial Intelligence

Neural Ordinary Differential Equations
https://arxiv.org/abs/1806.07366
Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden.. Read more

F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching
https://swivid.github.io/F5-TTS/
Abstract: This paper introduces F5-TTS, a fully non-autoregressive text-to-speech system based on flow matching with a Diffusion Transformer (DiT).. Read more

Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries
https://arxiv.org/abs/2409.12640
Abstract: We introduce Michelang.. Read more

MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
https://arxiv.org/abs/2405.12130
Abstract: Low-rank adaptation (LoRA) is a popular parameter-efficient fine-tuning method for large language models.. Read more
ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning
https://arxiv.org/abs/2411.05003
Abstract: Recently, breakthroughs in video modeling have allowed for controllable camera trajectories in generated videos. However, these methods cannot be directly applied to user-provided videos that are not generated by a video model. In this paper, we present Re.. Read more

Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT
https://arxiv.org/abs/1909.05840
Abstract: Transformer-based architectures have become the de facto models for a range of natural language processing tasks.. Read more

MambaStock: Selective state space model for stock prediction
I have been too busy with a work project to keep this up. Now that it has wrapped, I should get back to AI.
https://arxiv.org/abs/2402.18959
Abstract: The stock market plays a pivotal role in economic development, yet its intricate volatility poses challenges for investors. Consequently, research on and accurate prediction of stock price movements are crucial for mitigating risk. Traditional time series m.. Read more

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
I have been too busy with a project to keep up.... Here is one after a long while...
https://arxiv.org/abs/2102.12122
Abstract: Although using convolutional neural networks (CNNs) as backbones achieves great success in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions. Unlike the rec.. Read more