Juhan Nam
Cited by
Multimodal deep learning
J Ngiam, A Khosla, M Kim, J Nam, H Lee, AY Ng
Proceedings of the 28th International Conference on Machine Learning (ICML), 2011
Sample-level deep convolutional neural networks for music auto-tagging using raw waveforms
J Lee, J Park, KL Kim, J Nam
SMC, 220-226, 2017
SampleCNN: End-to-end deep convolutional neural networks using very small filters for music classification
J Lee, J Park, KL Kim, J Nam
Applied Sciences 8 (1), 150, 2018
Multi-level and multi-scale feature aggregation using pretrained convolutional neural networks for music auto-tagging
J Lee, J Nam
IEEE Signal Processing Letters 24 (8), 1208-1212, 2017
Deep learning for audio-based music classification and tagging: Teaching computers to distinguish rock from Bach
J Nam, K Choi, J Lee, SY Chou, YH Yang
IEEE Signal Processing Magazine 36 (1), 41-51, 2018
Sample-level CNN architectures for music auto-tagging using raw waveforms
T Kim, J Lee, J Nam
ICASSP, 366-370, 2018
A Classification-Based Polyphonic Piano Transcription Approach Using Learned Feature Representations
J Nam, J Ngiam, H Lee, M Slaney
ISMIR, 175-180, 2011
Systems and methods for evaluating strength of an audio password
LH Kim, J Nam, E Visser
US Patent 10,157,272, 2018
Learning Sparse Feature Representations for Music Annotation and Retrieval
J Nam, J Herrera, M Slaney, JO Smith III
ISMIR, 565-570, 2012
Raw waveform-based audio classification using sample-level CNN architectures
J Lee, T Kim, J Park, J Nam
Machine Learning for Audio Signal Processing Workshop, NIPS, 2017
Comparison and analysis of SampleCNN architectures for audio classification
T Kim, J Lee, J Nam
IEEE Journal of Selected Topics in Signal Processing 13 (2), 285-297, 2019
EMOPIA: a multi-modal pop piano dataset for emotion recognition and emotion-based music generation
HT Hung, J Ching, S Doh, N Kim, J Nam, YH Yang
ISMIR, 318-325, 2021
Representation learning of music using artist labels
J Park, J Lee, J Park, JW Ha, J Nam
ISMIR, 717-724, 2018
Melody Extraction on Vocal Segments Using Multi-Column Deep Neural Networks
S Kum, C Oh, J Nam
ISMIR, 171, 2016
Joint detection and classification of singing voice melody using convolutional recurrent neural networks
S Kum, J Nam
Applied Sciences 9 (7), 1324, 2019
Systems and methods for audio signal processing
E Visser, LH Kim, Y Guo, J Nam
US Patent App. 13/828,415, 2013
VirtuosoNet: A Hierarchical RNN-based System for Modeling Expressive Piano Performance
D Jeong, T Kwon, Y Kim, K Lee, J Nam
ISMIR, 908-915, 2019
Graph neural network for music score data and modeling expressive piano performance
D Jeong, T Kwon, Y Kim, J Nam
ICML, 3060-3070, 2019
Zero-shot learning for audio-based music classification and tagging
J Choi, J Lee, J Park, J Nam
ISMIR, 67-74, 2019
Disentangled multidimensional metric learning for music similarity
J Lee, NJ Bryan, J Salamon, Z Jin, J Nam
ICASSP, 6-10, 2020