Yi-Hsuan Yang
2020 – today
- 2024
- [c153]Chih-Pin Tan, Shuen-Huei Guan, Yi-Hsuan Yang:
PiCoGen: Generate Piano Covers with a Two-stage Approach. ICMR 2024: 1180-1184 - [i87]Yu-Hua Chen, Woosung Choi, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Kin Wai Cheuk, Yuki Mitsufuji, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Improving Unsupervised Clean-to-Rendered Guitar Tone Transformation Using GANs and Integrated Unaligned Clean Data. CoRR abs/2406.15751 (2024) - [i86]Yu-Hua Chen, Yen-Tung Yeh, Yuan-Chiao Cheng, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Towards zero-shot amplifier modeling: One-to-many amplifier modeling via tone embedding control. CoRR abs/2407.10646 (2024) - [i85]Yun-Han Lan, Wen-Yi Hsiao, Hao-Chung Cheng, Yi-Hsuan Yang:
MusiConGen: Rhythm and Chord Control for Transformer-Based Text-to-Music Generation. CoRR abs/2407.15060 (2024) - [i84]Fang-Duo Tsai, Shih-Lun Wu, Haven Kim, Bo-Yu Chen, Hao-Chung Cheng, Yi-Hsuan Yang:
Audio Prompt Adapter: Unleashing Music Editing Abilities for Text-to-Music with Lightweight Finetuning. CoRR abs/2407.16564 (2024) - [i83]Ying-Shuo Lee, Yueh-Po Peng, Jui-Te Wu, Ming Cheng, Li Su, Yi-Hsuan Yang:
Distortion Recovery: A Two-Stage Method for Guitar Effect Removal. CoRR abs/2407.16639 (2024) - [i82]Jingyue Huang, Yi-Hsuan Yang:
Emotion-Driven Melody Harmonization via Melodic Variation and Functional Representation. CoRR abs/2407.20176 (2024) - [i81]Chih-Pin Tan, Shuen-Huei Guan, Yi-Hsuan Yang:
PiCoGen: Generate Piano Covers with a Two-stage Approach. CoRR abs/2407.20883 (2024) - [i80]Jingyue Huang, Ke Chen, Yi-Hsuan Yang:
Emotion-driven Piano Music Generation via Two-stage Disentanglement and Functional Representation. CoRR abs/2407.20955 (2024) - [i79]Chih-Pin Tan, Hsin Ai, Yi-Hsin Chang, Shuen-Huei Guan, Yi-Hsuan Yang:
PiCoGen2: Piano cover generation with transfer learning approach and weakly aligned data. CoRR abs/2408.01551 (2024) - [i78]Yen-Tung Yeh, Wen-Yi Hsiao, Yi-Hsuan Yang:
Hyper Recurrent Neural Network: Condition Mechanisms for Black-box Audio Effect Modeling. CoRR abs/2408.04829 (2024) - [i77]Yen-Tung Yeh, Wen-Yi Hsiao, Yi-Hsuan Yang:
PyNeuralFx: A Python Package for Neural Audio Effect Modeling. CoRR abs/2408.06053 (2024) - [i76]Yen-Tung Yeh, Yu-Hua Chen, Yuan-Chiao Cheng, Jui-Te Wu, Jun-Jie Fu, Yi-Fan Yeh, Yi-Hsuan Yang:
DDSP Guitar Amp: Interpretable Guitar Amplifier Modeling. CoRR abs/2408.11405 (2024) - [i75]Dinh-Viet-Toan Le, Yi-Hsuan Yang:
METEOR: Melody-aware Texture-controllable Symbolic Orchestral Music Generation. CoRR abs/2409.11753 (2024) - [i74]Yu-Hua Chen, Yuan-Chiao Cheng, Yen-Tung Yeh, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Demo of Zero-Shot Guitar Amplifier Modelling: Enhancing Modeling with Hyper Neural Networks. CoRR abs/2410.04702 (2024) - 2023
- [j42]Shih-Lun Wu, Yi-Hsuan Yang:
MuseMorphose: Full-Song and Fine-Grained Piano Music Style Transfer With One Transformer VAE. IEEE ACM Trans. Audio Speech Lang. Process. 31: 1953-1967 (2023) - [j41]Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang:
Local Periodicity-Based Beat Tracking for Expressive Classical Piano Music. IEEE ACM Trans. Audio Speech Lang. Process. 31: 2824-2835 (2023) - [j40]Yi-Jen Shih, Shih-Lun Wu, Frank Zalkow, Meinard Müller, Yi-Hsuan Yang:
Theme Transformer: Symbolic Music Generation With Theme-Conditioned Transformer. IEEE Trans. Multim. 25: 3495-3508 (2023) - [c152]Shih-Lun Wu, Yi-Hsuan Yang:
Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage Approach. ICASSP 2023: 1-5 - [i73]Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang:
Local Periodicity-Based Beat Tracking for Expressive Classical Piano Music. CoRR abs/2308.10355 (2023) - 2022
- [j39]Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang:
An Analysis Method for Metric-Level Switching in Beat Tracking. IEEE Signal Process. Lett. 29: 2153-2157 (2022) - [c151]Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang:
Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks. ICASSP 2022: 466-470 - [c150]Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model. ICASSP 2022: 786-790 - [c149]Chien-Feng Liao, Jen-Yu Liu, Yi-Hsuan Yang:
KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE Using Mel-Spectrograms. ICASSP 2022: 956-960 - [c148]Da-Yi Wu, Wen-Yi Hsiao, Fu-Rong Yang, Oscar Friedman, Warren Jackson, Scott Bruzenak, Yi-Wen Liu, Yi-Hsuan Yang:
DDSP-based Singing Vocoders: A New Subtractive-based Synthesizer and A Comprehensive Evaluation. ISMIR 2022: 76-83 - [c147]Yen-Tung Yeh, Yi-Hsuan Yang, Bo-Yu Chen:
Exploiting Pre-trained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation. ISMIR 2022: 132-140 - [c146]Yueh-Kao Wu, Ching-Yu Chiu, Yi-Hsuan Yang:
Jukedrummer: Conditional Beat-aware Audio-domain Drum Accompaniment Generation via Transformer VQ-VAE. ISMIR 2022: 193-200 - [c145]Chih-Pin Tan, Alvin W. Y. Su, Yi-Hsuan Yang:
Melody Infilling with User-Provided Structural Context. ISMIR 2022: 834-841 - [i72]Yu-Hua Chen, Wen-Yi Hsiao, Tsu-Kuang Hsieh, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Towards Automatic Transcription of Polyphonic Electric Guitar Music: A New Dataset and a Multi-Loss Transformer Model. CoRR abs/2202.09907 (2022) - [i71]Da-Yi Wu, Wen-Yi Hsiao, Fu-Rong Yang, Oscar Friedman, Warren Jackson, Scott Bruzenak, Yi-Wen Liu, Yi-Hsuan Yang:
DDSP-based Singing Vocoders: A New Subtractive-based Synthesizer and A Comprehensive Evaluation. CoRR abs/2208.04756 (2022) - [i70]Yen-Tung Yeh, Bo-Yu Chen, Yi-Hsuan Yang:
Exploiting Pre-trained Feature Networks for Generative Adversarial Networks in Audio-domain Loop Generation. CoRR abs/2209.01751 (2022) - [i69]Shih-Lun Wu, Yi-Hsuan Yang:
Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage Approach. CoRR abs/2209.08212 (2022) - [i68]Chih-Pin Tan, Alvin W. Y. Su, Yi-Hsuan Yang:
Melody Infilling with User-Provided Structural Context. CoRR abs/2210.02829 (2022) - [i67]Yueh-Kao Wu, Ching-Yu Chiu, Yi-Hsuan Yang:
JukeDrummer: Conditional Beat-aware Audio-domain Drum Accompaniment Generation via Transformer VQ-VAE. CoRR abs/2210.06007 (2022) - [i66]Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang:
An Analysis Method for Metric-Level Switching in Beat Tracking. CoRR abs/2210.06817 (2022) - 2021
- [j38]Ching-Yu Chiu, Alvin Wen-Yu Su, Yi-Hsuan Yang:
Drum-Aware Ensemble Architecture for Improved Joint Musical Beat and Downbeat Tracking. IEEE Signal Process. Lett. 28: 1100-1104 (2021) - [j37]Juan Sebastián Gómez Cañón, Estefanía Cano, Tuomas Eerola, Perfecto Herrera, Xiao Hu, Yi-Hsuan Yang, Emilia Gómez:
Music Emotion Recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Process. Mag. 38(6): 106-114 (2021) - [j36]Eva Zangerle, Chih-Ming Chen, Ming-Feng Tsai, Yi-Hsuan Yang:
Leveraging Affective Hashtags for Ranking Music Recommendations. IEEE Trans. Affect. Comput. 12(1): 78-91 (2021) - [c144]Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang:
Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs. AAAI 2021: 178-186 - [c143]Fu-Rong Yang, Yin-Ping Cho, Yi-Hsuan Yang, Da-Yi Wu, Shan-Hung Wu, Yi-Wen Liu:
Mandarin Singing Voice Synthesis with a Phonology-based Duration Model. APSIPA ASC 2021: 1975-1981 - [c142]Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang:
Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking. EUSIPCO 2021: 391-395 - [c141]Antoine Liutkus, Ondrej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard:
Relative Positional Encoding for Transformers with Linear Complexity. ICML 2021: 7067-7079 - [c140]Chin-Jui Chang, Chun-Yi Lee, Yi-Hsuan Yang:
Variable-Length Music Score Infilling via XLNet and Musically Specialized Positional Encoding. ISMIR 2021: 97-104 - [c139]Juan Sebastián Gómez Cañón, Estefanía Cano, Yi-Hsuan Yang, Perfecto Herrera, Emilia Gómez:
Let's agree to disagree: Consensus Entropy Active Learning for Personalized Music Emotion Recognition. ISMIR 2021: 237-245 - [c138]Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang:
A Benchmarking Initiative for Audio-domain Music Generation using the FreeSound Loop Dataset. ISMIR 2021: 310-317 - [c137]Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang:
EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. ISMIR 2021: 318-325 - [c136]Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang:
DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. ISMIR 2021: 610-617 - [c135]Yi-Hsuan Yang:
Automatic Music Composition with Transformers. MMArt&ACM@ICMR 2021: 1 - [c134]Taejun Kim, Yi-Hsuan Yang, Juhan Nam:
Reverse-Engineering The Transition Regions of Real-World DJ Mixes using Sub-band Analysis with Convex Optimization. NIME 2021 - [i65]Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang:
Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs. CoRR abs/2101.02402 (2021) - [i64]Shih-Lun Wu, Yi-Hsuan Yang:
MuseMorphose: Full-Song and Fine-Grained Music Style Transfer with Just One Transformer VAE. CoRR abs/2105.04090 (2021) - [i63]Antoine Liutkus, Ondrej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard:
Relative Positional Encoding for Transformers with Linear Complexity. CoRR abs/2105.08399 (2021) - [i62]Ching-Yu Chiu, Alvin Wen-Yu Su, Yi-Hsuan Yang:
Drum-Aware Ensemble Architecture for Improved Joint Musical Beat and Downbeat Tracking. CoRR abs/2106.08685 (2021) - [i61]Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang:
Source Separation-based Data Augmentation for Improved Joint Beat and Downbeat Tracking. CoRR abs/2106.08703 (2021) - [i60]Yi-Hui Chou, I-Chun Chen, Chin-Jui Chang, Joann Ching, Yi-Hsuan Yang:
MidiBERT-Piano: Large-scale Pre-training for Symbolic Music Understanding. CoRR abs/2107.05223 (2021) - [i59]Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang:
DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. CoRR abs/2107.14653 (2021) - [i58]Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang:
EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. CoRR abs/2108.01374 (2021) - [i57]Tun-Min Hung, Bo-Yu Chen, Yen-Tung Yeh, Yi-Hsuan Yang:
A Benchmarking Initiative for Audio-Domain Music Generation Using the Freesound Loop Dataset. CoRR abs/2108.01576 (2021) - [i56]Chin-Jui Chang, Chun-Yi Lee, Yi-Hsuan Yang:
Variable-Length Music Score Infilling via XLNet and Musically Specialized Positional Encoding. CoRR abs/2108.05064 (2021) - [i55]Chien-Feng Liao, Jen-Yu Liu, Yi-Hsuan Yang:
KaraSinger: Score-Free Singing Voice Synthesis with VQ-VAE using Mel-spectrograms. CoRR abs/2110.04005 (2021) - [i54]Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang:
Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks. CoRR abs/2110.06525 (2021) - [i53]Wei-Han Hsu, Bo-Yu Chen, Yi-Hsuan Yang:
Deep Learning Based EDM Subgenre Classification using Mel-Spectrogram and Tempogram Features. CoRR abs/2110.08862 (2021) - [i52]Joann Ching, Yi-Hsuan Yang:
Learning To Generate Piano Music With Sustain Pedals. CoRR abs/2111.01216 (2021) - [i51]Yi-Jen Shih, Shih-Lun Wu, Frank Zalkow, Meinard Müller, Yi-Hsuan Yang:
Theme Transformer: Symbolic Music Generation with Theme-Conditioned Transformer. CoRR abs/2111.04093 (2021) - [i50]Chih-Pin Tan, Chin-Jui Chang, Alvin W. Y. Su, Yi-Hsuan Yang:
Music Score Expansion with Variable-Length Infilling. CoRR abs/2111.06046 (2021) - 2020
- [j35]Szu-Yu Chou, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Fast Tensor Factorization for Large-Scale Context-Aware Recommendation from Implicit Feedback. IEEE Trans. Big Data 6(1): 201-208 (2020) - [j34]Zhe-Cheng Fan, Tak-Shing T. Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang:
Backpropagation With N-D Vector-Valued Neurons Using Arbitrary Bilinear Products. IEEE Trans. Neural Networks Learn. Syst. 31(7): 2638-2652 (2020) - [c133]Tsung-Han Hsieh, Kai-Hsiang Cheng, Zhe-Cheng Fan, Yu-Ching Yang, Yi-Hsuan Yang:
Addressing The Confounds Of Accompaniments In Singer Identification. ICASSP 2020: 1-5 - [c132]Jayneel Parekh, Preeti Rao, Yi-Hsuan Yang:
Speech-To-Singing Conversion in an Encoder-Decoder Framework. ICASSP 2020: 261-265 - [c131]Jianyu Fan, Yi-Hsuan Yang, Kui Dong, Philippe Pasquier:
A Comparative Study of Western and Chinese Classical Music Based on Soundscape Models. ICASSP 2020: 521-525 - [c130]Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang:
Score and Lyrics-Free Singing Voice Generation. ICCC 2020: 196-203 - [c129]Da-Yi Wu, Yi-Hsuan Yang:
Speech-to-Singing Conversion Based on Boundary Equilibrium GAN. INTERSPEECH 2020: 1316-1320 - [c128]Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang:
Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization. INTERSPEECH 2020: 1997-2001 - [c127]Shih-Lun Wu, Yi-Hsuan Yang:
The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures. ISMIR 2020: 142-149 - [c126]António Ramires, Frederic Font, Dmitry Bogdanov, Jordan B. L. Smith, Yi-Hsuan Yang, Joann Ching, Bo-Yu Chen, Yueh-Kao Wu, Wei-Han Hsu, Xavier Serra:
The Freesound Loop Dataset and Annotation Tool. ISMIR 2020: 287-294 - [c125]Bo-Yu Chen, Jordan B. L. Smith, Yi-Hsuan Yang:
Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops. ISMIR 2020: 424-431 - [c124]Yu-Hua Chen, Yu-Siang Huang, Wen-Yi Hsiao, Yi-Hsuan Yang:
Automatic Composition of Guitar Tabs by Transformers and Groove Modeling. ISMIR 2020: 756-763 - [c123]Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam:
A Computational Analysis of Real-World DJ Mixes using Mix-To-Track Subsequence Alignment. ISMIR 2020: 764-770 - [c122]Yu-Siang Huang, Yi-Hsuan Yang:
Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions. ACM Multimedia 2020: 1180-1188 - [c121]Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su:
Mixing-Specific Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation. MMSP 2020: 1-6 - [i49]Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yian Chen, Terence Leong, Yi-Hsuan Yang:
Automatic Melody Harmonization with Triad Chords: A Comparative Study. CoRR abs/2001.02360 (2020) - [i48]Yu-Siang Huang, Yi-Hsuan Yang:
Pop Music Transformer: Generating Music with Rhythm and Harmony. CoRR abs/2002.00212 (2020) - [i47]Jayneel Parekh, Preeti Rao, Yi-Hsuan Yang:
Speech-to-Singing Conversion in an Encoder-Decoder Framework. CoRR abs/2002.06595 (2020) - [i46]Tsung-Han Hsieh, Kai-Hsiang Cheng, Zhe-Cheng Fan, Yu-Ching Yang, Yi-Hsuan Yang:
Addressing the confounds of accompaniments in singer identification. CoRR abs/2002.06817 (2020) - [i45]Jianyu Fan, Yi-Hsuan Yang, Kui Dong, Philippe Pasquier:
A Comparative Study of Western and Chinese Classical Music based on Soundscape Models. CoRR abs/2002.09021 (2020) - [i44]Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang:
Unconditional Audio Generation with Generative Adversarial Networks and Cycle Regularization. CoRR abs/2005.08526 (2020) - [i43]Da-Yi Wu, Yi-Hsuan Yang:
Speech-to-Singing Conversion based on Boundary Equilibrium GAN. CoRR abs/2005.13835 (2020) - [i42]Shih-Lun Wu, Yi-Hsuan Yang:
The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures. CoRR abs/2008.01307 (2020) - [i41]Yu-Hua Chen, Yu-Siang Huang, Wen-Yi Hsiao, Yi-Hsuan Yang:
Automatic Composition of Guitar Tabs by Transformers and Groove Modeling. CoRR abs/2008.01431 (2020) - [i40]Bo-Yu Chen, Jordan B. L. Smith, Yi-Hsuan Yang:
Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops. CoRR abs/2008.02011 (2020) - [i39]Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su:
Mixing-Specific Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation. CoRR abs/2008.02480 (2020) - [i38]Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam:
A Computational Analysis of Real-World DJ Mixes using Mix-To-Track Subsequence Alignment. CoRR abs/2008.10267 (2020) - [i37]António Ramires, Frederic Font, Dmitry Bogdanov, Jordan B. L. Smith, Yi-Hsuan Yang, Joann Ching, Bo-Yu Chen, Yueh-Kao Wu, Wei-Han Hsu, Xavier Serra:
The Freesound Loop Dataset and Annotation Tool. CoRR abs/2008.11507 (2020)
2010 – 2019
- 2019
- [j33]Juhan Nam, Keunwoo Choi, Jongpil Lee, Szu-Yu Chou, Yi-Hsuan Yang:
Deep Learning for Audio-Based Music Classification and Tagging: Teaching Computers to Distinguish Rock from Bach. IEEE Signal Process. Mag. 36(1): 41-51 (2019) - [j32]Ting-Wei Su, Yuan-Ping Chen, Li Su, Yi-Hsuan Yang:
TENT: Technique-Embedded Note Tracking for Real-World Guitar Solo Recordings. Trans. Int. Soc. Music. Inf. Retr. 2(1): 15-28 (2019) - [j31]Jen-Yu Liu, Yi-Hsuan Yang, Shyh-Kang Jeng:
Weakly-Supervised Visual Instrument-Playing Action Detection in Videos. IEEE Trans. Multim. 21(4): 887-901 (2019) - [c120]Bryan Wang, Yi-Hsuan Yang:
PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network. AAAI 2019: 1174-1181 - [c119]Hsiao-Tzu Hung, Chung-Yang Wang, Yi-Hsuan Yang, Hsin-Min Wang:
Improving Automatic Jazz Melody Generation by Transfer Learning Techniques. APSIPA 2019: 339-346 - [c118]Frédéric Tamagnan, Yi-Hsuan Yang:
Drum Fills Detection and Generation. CMMR 2019: 91-99 - [c117]Szu-Yu Chou, Kai-Hsiang Cheng, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Learning to Match Transient Sound Events Using Attentional Similarity for Few-shot Sound Recognition. ICASSP 2019: 26-30 - [c116]Tsung-Han Hsieh, Li Su, Yi-Hsuan Yang:
A Streamlined Encoder/decoder Architecture for Melody Extraction. ICASSP 2019: 156-160 - [c115]Yun-Ning Hung, Yi-An Chen, Yi-Hsuan Yang:
Multitask Learning for Frame-level Instrument Recognition. ICASSP 2019: 381-385 - [c114]Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang:
Musical Composition Style Transfer via Disentangled Timbre Representations. IJCAI 2019: 4697-4703 - [c113]Jen-Yu Liu, Yi-Hsuan Yang:
Dilated Convolution with Dilated GRU for Music Source Separation. IJCAI 2019: 4718-4724 - [c112]Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang:
Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation. IJCAI 2019: 6506-6508 - [c111]Zhe-Cheng Fan, Tak-Shing Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang:
Deep Cyclic Group Networks. IJCNN 2019: 1-8 - [c110]Eva Zangerle, Michael Vötter, Ramona Huber, Yi-Hsuan Yang:
Hit Song Prediction: Leveraging Low- and High-Level Audio Features. ISMIR 2019: 319-326 - [c109]Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh, Yi-Hsuan Yang:
A Minimal Template for Interactive Web-based Demonstrations of Musical Machine Learning. IUI Workshops 2019 - [c108]Hsiao-Tzu Hung, Yu-Hua Chen, Maximilian Mayerl, Michael Vötter, Eva Zangerle, Yi-Hsuan Yang:
MediaEval 2019 Emotion and Theme Recognition task: A VQ-VAE Based Approach. MediaEval 2019 - [c107]Maximilian Mayerl, Michael Vötter, Hsiao-Tzu Hung, Bo-Yu Chen, Yi-Hsuan Yang, Eva Zangerle:
Recognizing Song Mood and Theme Using Convolutional Recurrent Neural Networks. MediaEval 2019 - [c106]Kai-Hsiang Cheng, Szu-Yu Chou, Yi-Hsuan Yang:
Multi-label Few-shot Learning for Sound Event Recognition. MMSP 2019: 1-5 - [c105]Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, Yi-Hsuan Yang:
Collaborative Similarity Embedding for Recommender Systems. WWW 2019: 2637-2643 - [i36]Hao-Wen Dong, Yi-Hsuan Yang:
Towards a Deeper Understanding of Adversarial Losses. CoRR abs/1901.08753 (2019) - [i35]Vibert Thio, Hao-Min Liu, Yin-Cheng Yeh, Yi-Hsuan Yang:
A Minimal Template for Interactive Web-based Demonstrations of Musical Machine Learning. CoRR abs/1902.03722 (2019) - [i34]Chih-Ming Chen, Chuan-Ju Wang, Ming-Feng Tsai, Yi-Hsuan Yang:
Collaborative Similarity Embedding for Recommender Systems. CoRR abs/1902.06188 (2019) - [i33]Yu-Hua Chen, Bryan Wang, Yi-Hsuan Yang:
Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation. CoRR abs/1905.11689 (2019) - [i32]Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang:
Musical Composition Style Transfer via Disentangled Timbre Representations. CoRR abs/1905.13567 (2019) - [i31]Jen-Yu Liu, Yi-Hsuan Yang:
Dilated Convolution with Dilated GRU for Music Source Separation. CoRR abs/1906.01203 (2019) - [i30]Hsiao-Tzu Hung, Chung-Yang Wang, Yi-Hsuan Yang, Hsin-Min Wang:
Improving Automatic Jazz Melody Generation by Transfer Learning Techniques. CoRR abs/1908.09484 (2019) - [i29]Jen-Yu Liu, Yu-Hua Chen, Yin-Cheng Yeh, Yi-Hsuan Yang:
Score and Lyrics-Free Singing Voice Generation. CoRR abs/1912.11747 (2019) - [i28]Meinard Müller, Emilia Gómez, Yi-Hsuan Yang:
Computational Methods for Melody and Voice Processing in Music Recordings (Dagstuhl Seminar 19052). Dagstuhl Reports 9(1): 125-177 (2019) - 2018
- [j30]Yu-Hao Chin, Jia-Ching Wang, Ju-Chiang Wang, Yi-Hsuan Yang:
Predicting the Probability Density Function of Music Emotion Using Emotion Space Mapping. IEEE Trans. Affect. Comput. 9(4): 541-549 (2018) - [j29]Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang:
Pop Music Highlighter: Marking the Emotion Keypoints. Trans. Int. Soc. Music. Inf. Retr. 1(1): 68-78 (2018) - [j28]Jen-Chun Lin, Wen-Li Wei, Tyng-Luh Liu, Yi-Hsuan Yang, Hsin-Min Wang, Hsiao-Rong Tyan, Hong-Yuan Mark Liao:
Coherent Deep-Net Fusion To Classify Shots In Concert Videos. IEEE Trans. Multim. 20(11): 3123-3136 (2018) - [c104]Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, Yi-Hsuan Yang:
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment. AAAI 2018: 34-41 - [c103]Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang:
Generating Music Medleys via Playing Music Puzzle Games. AAAI 2018: 2281-2288 - [c102]Chia-An Yu, Ching-Lun Tai, Tak-Shing Chan, Yi-Hsuan Yang:
Modeling Multi-way Relations with Hypergraph Embedding. CIKM 2018: 1707-1710 - [c101]Wen-Li Wei, Jen-Chun Lin, Tyng-Luh Liu, Yi-Hsuan Yang, Hsin-Min Wang, Hsiao-Rong Tyan, Hong-Yuan Mark Liao:
Seethevoice: Learning from Music to Visual Storytelling of Shots. ICME 2018: 1-6 - [c100]Yi-Wei Chen, Yi-Hsuan Yang, Homer H. Chen:
Cross-Cultural Music Emotion Recognition by Adversarial Discriminative Domain Adaptation. ICMLA 2018: 467-472 - [c99]Hao-Min Liu, Yi-Hsuan Yang:
Lead Sheet Generation and Arrangement by Conditional Generative Adversarial Network. ICMLA 2018: 722-727 - [c98]Jen-Yu Liu, Yi-Hsuan Yang:
Denoising Auto-Encoder with Recurrent Skip Connections and Residual Regression for Music Source Separation. ICMLA 2018: 773-778 - [c97]Szu-Yu Chou, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Learning to Recognize Transient Sound Events using Attentional Supervision. IJCAI 2018: 3336-3342 - [c96]Yun-Ning Hung, Yi-Hsuan Yang:
Frame-level Instrument Recognition by Timbre and Pitch. ISMIR 2018: 135-142 - [c95]Hao-Wen Dong, Yi-Hsuan Yang:
Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation. ISMIR 2018: 190-196 - [i27]Tak-Shing T. Chan, Yi-Hsuan Yang:
Polar n-Complex and n-Bicomplex Singular Value Decomposition and Principal Component Pursuit. CoRR abs/1801.03773 (2018) - [i26]Tak-Shing T. Chan, Yi-Hsuan Yang:
Informed Group-Sparse Representation for Singing Voice Separation. CoRR abs/1801.03815 (2018) - [i25]Tak-Shing T. Chan, Yi-Hsuan Yang:
Complex and Quaternionic Principal Component Pursuit and Its Application to Audio Separation. CoRR abs/1801.03816 (2018) - [i24]Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang:
Pop Music Highlighter: Marking the Emotion Keypoints. CoRR abs/1802.10495 (2018) - [i23]Hao-Wen Dong, Yi-Hsuan Yang:
Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation. CoRR abs/1804.09399 (2018) - [i22]Jen-Yu Liu, Yi-Hsuan Yang, Shyh-Kang Jeng:
Weakly-supervised Visual Instrument-playing Action Detection in Videos. CoRR abs/1805.02031 (2018) - [i21]Zhe-Cheng Fan, Tak-Shing T. Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang:
Backpropagation with N-D Vector-Valued Neurons Using Arbitrary Bilinear Products. CoRR abs/1805.09621 (2018) - [i20]Yun-Ning Hung, Yi-Hsuan Yang:
Frame-level Instrument Recognition by Timbre and Pitch. CoRR abs/1806.09587 (2018) - [i19]Jen-Yu Liu, Yi-Hsuan Yang:
Denoising Auto-encoder with Recurrent Skip Connections and Residual Regression for Music Source Separation. CoRR abs/1807.01898 (2018) - [i18]Cheng-Wei Wu, Jen-Yu Liu, Yi-Hsuan Yang, Jyh-Shing Roger Jang:
Singing Style Transfer Using Cycle-Consistent Boundary Equilibrium Generative Adversarial Networks. CoRR abs/1807.02254 (2018) - [i17]Hao-Min Liu, Yi-Hsuan Yang:
Lead Sheet Generation and Arrangement by Conditional Generative Adversarial Network. CoRR abs/1807.11161 (2018) - [i16]Hao-Wen Dong, Yi-Hsuan Yang:
Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation. CoRR abs/1810.04714 (2018) - [i15]Tsung-Han Hsieh, Li Su, Yi-Hsuan Yang:
A Streamlined Encoder/Decoder Architecture for Melody Extraction. CoRR abs/1810.12947 (2018) - [i14]Yun-Ning Hung, Yi-An Chen, Yi-Hsuan Yang:
Multitask learning for frame-level instrument recognition. CoRR abs/1811.01143 (2018) - [i13]Yun-Ning Hung, Yi-An Chen, Yi-Hsuan Yang:
Learning Disentangled Representations for Timbre and Pitch in Music Audio. CoRR abs/1811.03271 (2018) - [i12]Bryan Wang, Yi-Hsuan Yang:
PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network. CoRR abs/1811.04357 (2018) - [i11]Szu-Yu Chou, Kai-Hsiang Cheng, Jyh-Shing Roger Jang, Yi-Hsuan Yang:
Learning to match transient sound events using attentional similarity for few-shot sound recognition. CoRR abs/1812.01269 (2018) - 2017
- [j27]Yuan-Pin Lin, Ping-Keng Jao, Yi-Hsuan Yang:
Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis. Frontiers Comput. Neurosci. 11: 64 (2017) - [j26]Xiao Hu, Yi-Hsuan Yang:
The mood of Chinese Pop music: Representation and recognition. J. Assoc. Inf. Sci. Technol. 68(8): 1899-1910 (2017) - [j25]Tak-Shing Chan, Yi-Hsuan Yang:
Informed Group-Sparse Representation for Singing Voice Separation. IEEE Signal Process. Lett. 24(2): 156-160 (2017) - [j24]Xiao Hu, Yi-Hsuan Yang:
Cross-Dataset and Cross-Cultural Music Mood Prediction: A Case on Western and Chinese Pop Songs. IEEE Trans. Affect. Comput. 8(2): 228-240 (2017) - [j23]Yu-An Chen, Ju-Chiang Wang, Yi-Hsuan Yang, Homer H. Chen:
Component Tying for Mixture Model Adaptation in Personalization of Music Emotion Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 25(7): 1409-1420 (2017) - [j22]Markus Schedl, Yi-Hsuan Yang, Perfecto Herrera-Boyer:
Introduction to Intelligent Music Systems and Applications. ACM Trans. Intell. Syst. Technol. 8(2): 17:1-17:8 (2017) - [c94]Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang:
Music thumbnailing via neural attention modeling of music emotion. APSIPA 2017: 347-350 - [c93]Chia-An Yu, Tak-Shing Chan, Yi-Hsuan Yang:
Low-Rank Matrix Completion over Finite Abelian Group Algebras for Context-Aware Recommendation. CIKM 2017: 2415-2418 - [c92]Lufei Gao, Li Su, Yi-Hsuan Yang, Tan Lee:
Polyphonic piano note transcription with non-negative matrix factorization of differential spectrogram. ICASSP 2017: 291-295 - [c91]Shih-Yang Su, Cheng-Kai Chiu, Li Su, Yi-Hsuan Yang:
Automatic conversion of Pop music into chiptunes for 8-bit pixel art. ICASSP 2017: 411-415 - [c90]Li-Chia Yang, Szu-Yu Chou, Jen-Yu Liu, Yi-Hsuan Yang, Yi-An Chen:
Revisiting the problem of audio-based hit song prediction using convolutional neural networks. ICASSP 2017: 621-625 - [c89]Ting-Wei Su, Jen-Yu Liu, Yi-Hsuan Yang:
Weakly-supervised audio event detection using event-specific Gaussian filters and fully convolutional networks. ICASSP 2017: 791-795 - [c88]Wen-Li Wei, Jen-Chun Lin, Tyng-Luh Liu, Yi-Hsuan Yang, Hsin-Min Wang, Hsiao-Rong Tyan, Hong-Yuan Mark Liao:
Deep-net fusion to classify shots in concert videos. ICASSP 2017: 1383-1387 - [c87]Szu-Yu Chou, Li-Chia Yang, Yi-Hsuan Yang, Jyh-Shing Roger Jang:
Conditional preference nets for user and item cold start problems in music recommendation. ICME 2017: 1147-1152 - [c86]Li-Chia Yang, Szu-Yu Chou, Yi-Hsuan Yang:
MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation. ISMIR 2017: 324-331 - [p2]Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang:
Affective Music Information Retrieval. Emotions and Personality in Personalized Services 2017: 227-261 - [i10]Li-Chia Yang, Szu-Yu Chou, Yi-Hsuan Yang:
MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation using 1D and 2D Conditions. CoRR abs/1703.10847 (2017) - [i9]Li-Chia Yang, Szu-Yu Chou, Jen-Yu Liu, Yi-Hsuan Yang, Yi-An Chen:
Revisiting the problem of audio-based hit song prediction using convolutional neural networks. CoRR abs/1704.01280 (2017) - [i8]Zhe-Cheng Fan, Tak-Shing Chan, Yi-Hsuan Yang, Jyh-Shing Roger Jang:
Music Signal Processing Using Vector Product Neural Networks. CoRR abs/1706.09555 (2017) - [i7]Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang:
Similarity Embedding Network for Unsupervised Sequential Pattern Learning by Playing Music Puzzle Games. CoRR abs/1709.04384 (2017) - [i6]Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, Yi-Hsuan Yang:
MuseGAN: Symbolic-domain Music Generation and Accompaniment with Multi-track Sequential Generative Adversarial Networks. CoRR abs/1709.06298 (2017) - [i5]Lang-Chi Yu, Yi-Hsuan Yang, Yun-Ning Hung, Yi-An Chen:
Hit Song Prediction for Pop Music by Siamese CNN with Ranking Loss. CoRR abs/1710.10814 (2017) - [i4]Chih-Ming Chen, Yi-Hsuan Yang, Yian Chen, Ming-Feng Tsai:
Vertex-Context Sampling for Weighted Network Embedding. CoRR abs/1711.00227 (2017) - 2016
- [j21]Tak-Shing Chan, Yi-Hsuan Yang:
Complex and Quaternionic Principal Component Pursuit and Its Application to Audio Separation. IEEE Signal Process. Lett. 23(2): 287-291 (2016) - [j20]Ping-Keng Jao, Li Su, Yi-Hsuan Yang, Brendt Wohlberg:
Monaural Music Source Separation Using Convolutional Sparse Coding. IEEE ACM Trans. Audio Speech Lang. Process. 24(11): 2158-2170 (2016) - [j19]Tak-Shing Chan, Yi-Hsuan Yang:
Polar n-Complex and n-Bicomplex Singular Value Decomposition and Principal Component Pursuit. IEEE Trans. Signal Process. 64(24): 6533-6544 (2016) - [c85]Mu-Heng Yang, Li Su, Yi-Hsuan Yang:
Highlighting root notes in chord recognition using cepstral features and multi-task learning. APSIPA 2016: 1-8 - [c84]Li Su, Tsung-Ying Chuang, Yi-Hsuan Yang:
Exploiting Frequency, Periodicity and Harmonicity Using Advanced Time-Frequency Concentration Techniques for Multipitch Estimation of Choir and Symphony. ISMIR 2016: 393-399 - [c83]Anna Aljanaki, Yi-Hsuan Yang, Mohammad Soleymani:
Emotion in Music task: Lessons Learned. MediaEval 2016 - [c82]Jen-Yu Liu, Yi-Hsuan Yang:
Event Localization in Music Auto-tagging. ACM Multimedia 2016: 1048-1057 - [c81]Chih-Ming Chen, Ming-Feng Tsai, Yu-Ching Lin, Yi-Hsuan Yang:
Query-based Music Recommendations via Preference Embedding. RecSys 2016: 79-82 - [c80]Szu-Yu Chou, Yi-Hsuan Yang, Jyh-Shing Roger Jang, Yu-Ching Lin:
Addressing Cold Start for Next-song Recommendation. RecSys 2016: 115-118 - [p1]Yi-Hsuan Yang, Ju-Chiang Wang, Yu-An Chen, Homer H. Chen:
Model Adaptation for Personalized Music Emotion Recognition. Handbook of Pattern Recognition and Computer Vision 2016: 175-193 - [i3]Kai-Chun Hsu, Szu-Yu Chou, Yi-Hsuan Yang, Tai-Shih Chi:
Neural Network Based Next-Song Recommendation. CoRR abs/1606.07722 (2016) - [i2]Jen-Yu Liu, Shyh-Kang Jeng, Yi-Hsuan Yang:
Applying Topological Persistence in Convolutional Neural Network for Music Audio Signals. CoRR abs/1608.07373 (2016) - 2015
- [j18]Ping-Keng Jao, Yi-Hsuan Yang:
Music Annotation and Retrieval using Unlabeled Exemplars: Correlation and Sparse Codes. IEEE Signal Process. Lett. 22(10): 1771-1775 (2015) - [j17]Che-Yuan Liang, Li Su, Yi-Hsuan Yang:
Musical Onset Detection Using Constrained Linear Reconstruction. IEEE Signal Process. Lett. 22(11): 2142-2146 (2015) - [j16]Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang, Shyh-Kang Jeng:
Modeling the Affective Content of Music with a Gaussian Mixture Model. IEEE Trans. Affect. Comput. 6(1): 56-68 (2015) - [j15]Mohammad Soleymani, Yi-Hsuan Yang, Go Irie, Alan Hanjalic:
Guest Editorial: Challenges and Perspectives for Affective Analysis in Multimedia. IEEE Trans. Affect. Comput. 6(3): 206-208 (2015) - [j14]Li Su, Yi-Hsuan Yang:
Combining Spectral and Temporal Representations for Multipitch Estimation of Polyphonic Music. IEEE ACM Trans. Audio Speech Lang. Process. 23(10): 1600-1612 (2015) - [j13]Yi-Hsuan Yang, Yuan-Ching Teng:
Quantitative Study of Music Listening Behavior in a Smartphone Context. ACM Trans. Interact. Intell. Syst. 5(3): 14:1-14:30 (2015) - [c79]Li Su, Yi-Hsuan Yang:
Escaping from the Abyss of Manual Annotation: New Methodology of Building Polyphonic Datasets for Automatic Music Transcription. CMMR 2015: 309-321 - [c78]Ping-Keng Jao, Yuan-Pin Lin, Yi-Hsuan Yang, Tzyy-Ping Jung:
Using robust principal component analysis to alleviate day-to-day variability in EEG based emotion classification. EMBC 2015: 570-573 - [c77]Ping-Keng Jao, Yi-Hsuan Yang, Brendt Wohlberg:
Informed monaural source separation of music based on convolutional sparse coding. ICASSP 2015: 236-240 - [c76]Yu-An Chen, Yi-Hsuan Yang, Ju-Chiang Wang, Homer H. Chen:
The AMG1608 dataset for music emotion recognition. ICASSP 2015: 693-697 - [c75]Tak-Shing Chan, Tzu-Chun Yeh, Zhe-Cheng Fan, Hung-Wei Chen, Li Su, Yi-Hsuan Yang, Jyh-Shing Roger Jang:
Vocal activity informed singing voice separation with the iKala dataset. ICASSP 2015: 718-722 - [c74]Szu-Yu Chou, Yi-Hsuan Yang, Yu-Ching Lin:
Evaluating music recommendation in a real-world setting: On data splitting and evaluation metrics. ICME 2015: 1-6 - [c73]Che-Yuan Liang, Li Su, Yi-Hsuan Yang, Hsin-Ming Lin:
Musical Offset Detection of Pitched Instruments: The Case of Violin. ISMIR 2015: 281-287 - [c72]Yin-Jyun Luo, Li Su, Yi-Hsuan Yang, Tai-Shih Chi:
Detection of Common Mistakes in Novice Violin Playing. ISMIR 2015: 316-322 - [c71]Yuan-Ping Chen, Li Su, Yi-Hsuan Yang:
Electric Guitar Playing Technique Detection in Real-World Recording Based on F0 Sequence Pattern Recognition. ISMIR 2015: 708-714 - [c70]Pei-Ching Li, Li Su, Yi-Hsuan Yang, Alvin W. Y. Su:
Analysis of Expressive Musical Terms in Violin Using Score-Informed and Expression-Based Audio Features. ISMIR 2015: 809-815 - [c69]Anna Aljanaki, Yi-Hsuan Yang, Mohammad Soleymani:
Emotion in Music Task at MediaEval 2015. MediaEval 2015 - [c68]Jheng-Wei Peng, Shih-Wei Sun, Wen-Huang Cheng, Yi-Hsuan Yang:
eMosic: Mobile Media Pushing through Social Emotion Sensing. ACM Multimedia 2015: 753-754 - [c67]Mohammad Soleymani, Yi-Hsuan Yang, Yu-Gang Jiang, Shih-Fu Chang:
ASM'15: The 1st International Workshop on Affect and Sentiment in Multimedia. ACM Multimedia 2015: 1349 - [c66]Chih-Ming Chen, Po-Chuan Chien, Yu-Ching Lin, Ming-Feng Tsai, Yi-Hsuan Yang:
Do You Have a Pop Face? Here is a Pop Song. Using Profile Pictures to Mitigate the Cold-start Problem in Music Recommender Systems. RecSys Posters 2015 - [e3]Martha A. Larson, Bogdan Ionescu, Mats Sjöberg, Xavier Anguera, Johann Poignant, Michael Riegler, Maria Eskevich, Claudia Hauff, Richard F. E. Sutcliffe, Gareth J. F. Jones, Yi-Hsuan Yang, Mohammad Soleymani, Symeon Papadopoulos:
Working Notes Proceedings of the MediaEval 2015 Workshop, Wurzen, Germany, September 14-15, 2015. CEUR Workshop Proceedings 1436, CEUR-WS.org 2015 [contents] - [e2]Mohammad Soleymani, Yi-Hsuan Yang, Yu-Gang Jiang, Shih-Fu Chang:
Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia, ASM 2015, Brisbane, Australia, October 30, 2015. ACM 2015, ISBN 978-1-4503-3750-2 [contents] - [i1]Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang:
Affective Music Information Retrieval. CoRR abs/1502.05131 (2015) - 2014
- [j12]Li Su, Hsin-Ming Lin, Yi-Hsuan Yang:
Sparse modeling of magnitude and phase-derived spectra for playing technique classification. IEEE ACM Trans. Audio Speech Lang. Process. 22(12): 2122-2132 (2014) - [j11]Li Su, Chin-Chia Michael Yeh, Jen-Yu Liu, Ju-Chiang Wang, Yi-Hsuan Yang:
A Systematic Evaluation of the Bag-of-Frames Representation for Music Information Retrieval. IEEE Trans. Multim. 16(5): 1188-1200 (2014) - [c65]Chin-Chia Michael Yeh, Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang:
Improving music auto-tagging by intra-song instance bagging. ICASSP 2014: 2139-2143 - [c64]Yu-An Chen, Ju-Chiang Wang, Yi-Hsuan Yang, Homer H. Chen:
Linear regression-based adaptation of music emotion recognition models for personalization. ICASSP 2014: 2149-2153 - [c63]Ping-Keng Jao, Chin-Chia Michael Yeh, Yi-Hsuan Yang:
Modified lasso screening for audio word-based music classification using large-scale dictionary. ICASSP 2014: 5207-5211 - [c62]Li-Fan Yu, Li Su, Yi-Hsuan Yang:
Sparse cepstral codes and power scale for instrument identification. ICASSP 2014: 7460-7464 - [c61]Chih-Ming Chen, Hsin-Ping Chen, Ming-Feng Tsai, Yi-Hsuan Yang:
Leverage Item Popularity and Recommendation Quality via Cost-Sensitive Factorization Machines. ICDM Workshops 2014: 1158-1162 - [c60]Xiao Hu, Yi-Hsuan Yang:
A Study on Cross-cultural and Cross-dataset Generalizability of Music Mood Regression Models. ICMC 2014 - [c59]Li Su, Yi-Hsuan Yang:
Power-Scaled Spectral Flux and Peak-Valley Group-Delay Methods for Robust Musical Onset Detection. ICMC 2014 - [c58]Li Su, Li-Fan Yu, Yi-Hsuan Yang, Hsin-Yu Lai:
Resolving Octave Ambiguities: A Cross-dataset Investigation. ICMC 2014 - [c57]Wei-Chih Lin, Shih-Wei Sun, Wen-Huang Cheng, Yi-Hsuan Yang, Kai-Lung Hua, Fujui Wang, Jun-Jieh Wang:
Attaching-music: An interactive music delivery system for private listening as wherever you go. ICME Workshops 2014: 1-2 - [c56]Jen-Yu Liu, Sung-Yen Liu, Yi-Hsuan Yang:
LJ2M dataset: Toward better understanding of music listening behavior and user mood. ICME 2014: 1-6 - [c55]Shuo-Yang Wang, Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang:
Towards time-varying music auto-tagging based on CAL500 expansion. ICME 2014: 1-6 - [c54]Che-Hua Yeh, Yi-Hsuan Yang, Ming-Hsu Chang, Hong-Yuan Mark Liao:
Music Driven Human Motion Manipulation for Characters in a Video. ISM 2014: 241-244 - [c53]Li Su, Li-Fan Yu, Yi-Hsuan Yang:
Sparse Cepstral, Phase Codes for Guitar Playing Technique Classification. ISMIR 2014: 9-14 - [c52]Ju-Chiang Wang, Ming-Chi Yen, Yi-Hsuan Yang, Hsin-Min Wang:
Automatic Set List Identification and Song Segmentation for Full-Length Concert Videos. ISMIR 2014: 239-244 - [c51]Xiao Hu, Yi-Hsuan Yang:
Cross-cultural mood regression for music digital libraries. JCDL 2014: 471-472 - [c50]Anna Aljanaki, Yi-Hsuan Yang, Mohammad Soleymani:
Emotion in Music Task at MediaEval 2014. MediaEval 2014 - [c49]Chin-Chia Michael Yeh, Ping-Keng Jao, Yi-Hsuan Yang:
AWtoolbox: Characterizing Audio Information Using Audio Words. ACM Multimedia 2014: 809-812 - [c48]Mohammad Soleymani, Anna Aljanaki, Yi-Hsuan Yang, Michael N. Caro, Florian Eyben, Konstantin Markov, Björn W. Schuller, Remco C. Veltkamp, Felix Weninger, Frans Wiering:
Emotional Analysis of Music: A Comparison of Methods. ACM Multimedia 2014: 1161-1164 - [e1]Hsin-Min Wang, Yi-Hsuan Yang, Jin Ha Lee:
Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR 2014, Taipei, Taiwan, October 27-31, 2014. 2014 [contents] - 2013
- [j10]Keng-Sheng Lin, Ann Lee, Yi-Hsuan Yang, Cheng-Te Lee, Homer H. Chen:
Automatic highlights extraction for drama video using music emotion and human face features. Neurocomputing 119: 111-117 (2013) - [j9]Yi-Hsuan Yang, Jen-Yu Liu:
Quantitative Study of Music Listening Behavior in a Social and Affective Context. IEEE Trans. Multim. 15(6): 1304-1315 (2013) - [c47]Ping-Keng Jao, Li Su, Yi-Hsuan Yang:
Analyzing the dictionary properties and sparsity constraints for a dictionary-based music genre classification system. APSIPA 2013: 1-8 - [c46]Chin-Chia Michael Yeh, Yi-Hsuan Yang:
Towards a more efficient sparse coding based audio-word feature extraction system. APSIPA 2013: 1-7 - [c45]Chin-Chia Michael Yeh, Li Su, Yi-Hsuan Yang:
Dual-layer bag-of-frames model for music genre classification. ICASSP 2013: 246-250 - [c44]Cheng-Ya Sha, Yi-Hsuan Yang, Yu-Ching Lin, Homer H. Chen:
Singing voice timbre classification of Chinese popular music. ICASSP 2013: 734-738 - [c43]Yuan-Ching Teng, Ying-Shu Kuo, Yi-Hsuan Yang:
A large in-situ dataset for context-aware music recommendation on smartphones. ICME Workshops 2013: 1-4 - [c42]Yi-Hsuan Yang:
Towards real-time music auto-tagging using sparse features. ICME 2013: 1-6 - [c41]Li Su, Yi-Hsuan Yang:
Sparse Modeling for Artist Identification: Exploiting Phase Information and Vocal Separation. ISMIR 2013: 349-354 - [c40]Yi-Hsuan Yang:
Low-Rank Representation of Both Singing Voice and Music Accompaniment Via Learned Dictionaries. ISMIR 2013: 427-432 - [c39]Mohammad Soleymani, Michael N. Caro, Erik M. Schmidt, Yi-Hsuan Yang:
The MediaEval 2013 Brave New Task: Emotion in Music. MediaEval 2013 - [c38]Mohammad Soleymani, Michael N. Caro, Erik M. Schmidt, Cheng-Ya Sha, Yi-Hsuan Yang:
1000 songs for emotional analysis of music. CrowdMM@ACM Multimedia 2013: 1-6 - [c37]Chih-Ming Chen, Ming-Feng Tsai, Jen-Yu Liu, Yi-Hsuan Yang:
Using emotional context from article for contextual music recommendation. ACM Multimedia 2013: 649-652 - [c36]Chih-Ming Chen, Ming-Feng Tsai, Jen-Yu Liu, Yi-Hsuan Yang:
Music Recommendation Based on Multiple Contextual Similarity Information. Web Intelligence 2013: 65-72 - 2012
- [j8]Yi-Hsuan Yang, Homer H. Chen:
Machine Recognition of Music Emotion: A Review. ACM Trans. Intell. Syst. Technol. 3(3): 40:1-40:30 (2012) - [j7]Cheng-Te Lee, Yi-Hsuan Yang, Homer H. Chen:
Multipitch Estimation of Piano Music by Exemplar-Based Sparse Representation. IEEE Trans. Multim. 14(3-1): 608-618 (2012) - [c35]Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang, Shyh-Kang Jeng:
Personalized music emotion recognition via model adaptation. APSIPA 2012: 1-7 - [c34]Yi-Hsuan Yang, Xiao Hu:
Cross-cultural Music Mood Classification: A Comparison on English and Chinese Songs. ISMIR 2012: 19-24 - [c33]Chin-Chia Michael Yeh, Yi-Hsuan Yang:
Supervised dictionary learning for music genre classification. ICMR 2012: 55 - [c32]Jen-Yu Liu, Yi-Hsuan Yang:
Inferring personal traits from music listening history. MIRUM 2012: 31-36 - [c31]Ju-Chiang Wang, Yi-Hsuan Yang, Kaichun Chang, Hsin-Min Wang, Shyh-Kang Jeng:
Exploring the relationship between categorical and dimensional emotion semantics of music. MIRUM 2012: 63-68 - [c30]Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang, Shyh-Kang Jeng:
The acoustic emotion gaussians model for emotion-based music annotation and retrieval. ACM Multimedia 2012: 89-98 - [c29]Yi-Hsuan Yang:
On sparse and low-rank matrix decomposition for singing voice separation. ACM Multimedia 2012: 757-760 - [c28]Jen-Yu Liu, Chin-Chia Michael Yeh, Yi-Hsuan Yang, Yuan-Ching Teng:
Bilingual analysis of song lyrics and audio words. ACM Multimedia 2012: 829-832 - [c27]Ju-Chiang Wang, Yi-Hsuan Yang, I-Hong Jhuo, Yen-Yu Lin, Hsin-Min Wang:
The acoustic-visual emotion Gaussians model for automatic generation of music video. ACM Multimedia 2012: 1379-1380 - [c26]Yi-Hsuan Yang, Dmitry Bogdanov, Perfecto Herrera, Mohamed Sordo:
Music retagging using label propagation and robust principal component analysis. WWW (Companion Volume) 2012: 869-876 - 2011
- [j6]Yi-Hsuan Yang, Homer H. Chen:
Ranking-Based Emotion Recognition for Music Organization and Retrieval. IEEE Trans. Speech Audio Process. 19(4): 762-774 (2011) - [j5]Yi-Hsuan Yang, Homer H. Chen:
Prediction of the Distribution of Perceived Music Emotions Using Discrete Samples. IEEE Trans. Speech Audio Process. 19(7): 2184-2196 (2011) - [j4]Yu-Ching Lin, Yi-Hsuan Yang, Homer H. Chen:
Exploiting online music tags for music emotion classification. ACM Trans. Multim. Comput. Commun. Appl. 7(Supplement): 26 (2011) - [c25]Yin-Hsi Kuo, Hsuan-Tien Lin, Wen-Huang Cheng, Yi-Hsuan Yang, Winston H. Hsu:
Unsupervised auxiliary visual words discovery for large-scale image object retrieval. CVPR 2011: 905-912 - [c24]Cheng-Te Lee, Yi-Hsuan Yang, Homer H. Chen:
Automatic transcription of piano music by sparse representation of magnitude spectra. ICME 2011: 1-6 - [c23]Keng-Sheng Lin, Ann Lee, Yi-Hsuan Yang, Cheng-Te Lee, Homer H. Chen:
Automatic highlights extraction for drama video using music emotion and human face features. MMSP 2011: 1-6 - 2010
- [c22]Yin-Hsi Kuo, Yi-Lun Wu, Kuan-Ting Chen, Yi-Hsuan Yang, Tzu-Hsuan Chiu, Winston H. Hsu:
A technical demonstration of large-scale image object retrieval by efficient query evaluation and effective auxiliary visual feature discovery. ACM Multimedia 2010: 1559-1562
2000 – 2009
- 2009
- [j3]Yi-Hsuan Yang, Winston H. Hsu, Homer H. Chen:
Online Reranking via Ordinal Informative Concepts for Context Fusion in Concept Detection and Video Search. IEEE Trans. Circuits Syst. Video Technol. 19(12): 1880-1890 (2009) - [j2]Ya-Fan Su, Yi-Hsuan Yang, Meng-Ting Lu, Homer H. Chen:
Smooth Control of Adaptive Media Playout for Video Streaming. IEEE Trans. Multim. 11(7): 1331-1339 (2009) - [c21]Yi-Hsuan Yang, Homer H. Chen:
Music emotion ranking. ICASSP 2009: 1657-1660 - [c20]Yu-Ching Lin, Yi-Hsuan Yang, Homer H. Chen, I-Bin Liao, Yeh-Chin Ho:
Exploiting genre for music emotion classification. ICME 2009: 618-621 - [c19]Yi-Hsuan Yang, Yu-Ching Lin, Homer H. Chen:
Clustering for music search results. ICME 2009: 874-877 - [c18]Heng Tze Cheng, Yi-Hsuan Yang, Yu-Ching Lin, Homer H. Chen:
Multimodal Structure Segmentation and Analysis of Music using Audio and Textual Information. ISCAS 2009: 1677-1680 - [c17]Yi-Hsuan Yang, Yu-Ching Lin, Ann Lee, Homer H. Chen:
Improving Musical Concept Detection by Ordinal Regression and Context Fusion. ISMIR 2009: 147-152 - [c16]Min-Yian Su, Yi-Hsuan Yang, Yu-Ching Lin, Homer H. Chen:
An Integrated Approach to Music Boundary Detection. ISMIR 2009: 705-710 - [c15]Liang-Chi Hsieh, Kuan-Ting Chen, Chien-Hsing Chiang, Yi-Hsuan Yang, Guan-Long Wu, Chun-Sung Ferng, Hsiu-Wen Hsueh, Angela Charng-Rurng Tsai, Winston H. Hsu:
Canonical image selection and efficient image graph construction for large-scale flickr photos. ACM Multimedia 2009: 1121-1122 - [c14]Yi-Hsuan Yang, Yu-Ching Lin, Homer H. Chen:
Personalized music emotion recognition. SIGIR 2009: 748-749 - 2008
- [j1]Yi-Hsuan Yang, Yu-Ching Lin, Ya-Fan Su, Homer H. Chen:
A Regression Approach to Music Emotion Recognition. IEEE Trans. Speech Audio Process. 16(2): 448-457 (2008) - [c13]Yi-Hsuan Yang, Winston H. Hsu:
Video search reranking via online ordinal reranking. ICME 2008: 285-288 - [c12]Heng Tze Cheng, Yi-Hsuan Yang, Yu-Ching Lin, I-Bin Liao, Homer H. Chen:
Automatic chord recognition for music classification and retrieval. ICME 2008: 1505-1508 - [c11]Yi-Hsuan Yang, Po Tun Wu, Ching-Wei Lee, Kuan Hung Lin, Winston H. Hsu, Homer H. Chen:
ContextSeer: context search and recommendation at query time for shared consumer photos. ACM Multimedia 2008: 199-208 - [c10]Po Tun Wu, Yi-Hsuan Yang, Kuan-Ting Chen, Winston H. Hsu, Tien-Hsu Lee, Chun Jen Lee:
Keyword-based concept search on consumer photos by web-based kernel function. ACM Multimedia 2008: 651-654 - [c9]Yi-Hsuan Yang, Yu-Ching Lin, Heng Tze Cheng, Homer H. Chen:
Mr. Emo: music retrieval in the emotion plane. ACM Multimedia 2008: 1003-1004 - [c8]Tien-Lin Wu, Hsuan-Kai Wang, Chien-Chang Ho, Yuan-Pin Lin, Ting-Ting Hu, Ming-Fang Weng, Li-Wei Chan, Changhua Yang, Yi-Hsuan Yang, Yi-Ping Hung, Yung-Yu Chuang, Hsin-Hsi Chen, Homer H. Chen, Jyh-Horng Chen, Shyh-Kang Jeng:
Interactive content presentation based on expressed emotion and physiological feedback. ACM Multimedia 2008: 1009-1010 - [c7]Yi-Hsuan Yang, Yu-Ching Lin, Heng Tze Cheng, I-Bin Liao, Yeh-Chin Ho, Homer H. Chen:
Toward Multi-modal Music Emotion Classification. PCM 2008: 70-79 - 2007
- [c6]Yi-Hsuan Yang, Yu-Ching Lin, Ya-Fan Su, Homer H. Chen:
Music Emotion Classification: A Regression Approach. ICME 2007: 208-211 - [c5]Yi-Hsuan Yang, Ya-Fan Su, Yu-Ching Lin, Homer H. Chen:
Music emotion recognition: the role of individuality. HCM@MM 2007: 13-22 - [c4]Ming-Fang Weng, Chun-Kang Chen, Yi-Hsuan Yang, Rong-En Fan, Yu-Ting Hsieh, Yung-Yu Chuang, Winston H. Hsu, Chih-Jen Lin:
The NTU Toolkit and Framework for High-Level Feature Detection at TRECVID 2007. TRECVID 2007 - 2006
- [c3]Yi-Hsuan Yang, Meng-Ting Lu, Homer H. Chen:
Smooth Playout Control for Video Streaming over Error-Prone Channels. ISM 2006: 415-418 - [c2]Chia Chu Liu, Yi-Hsuan Yang, Ping-Hao Wu, Homer H. Chen:
Detecting and Classifying Emotion in Popular Music. JCIS 2006 - [c1]Yi-Hsuan Yang, Chia Chu Liu, Homer H. Chen:
Music emotion classification: a fuzzy approach. ACM Multimedia 2006: 81-84