-
Rethinking Batch Normalization in Transformers (ARXIV/NLP, 2020. 3. 25. 19:24)
https://arxiv.org/abs/2003.07845v1
The standard normalization method for neural network (NN) models used in Natural Language Processing (NLP) is layer normalization (LN). This is different than batch normalization (BN), which is widely-adopted in Computer Vision. The preferred use of LN in ..
abstract: The standard normalization method for neural network models used in NLP is ..
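The practical difference between the two normalizers is only the axis over which statistics are computed; a minimal NumPy sketch of that difference (illustrative only, with the learnable scale and shift omitted, and not the method this paper proposes):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # BN: statistics per feature, computed across the batch dimension
    return (x - x.mean(axis=0, keepdims=True)) / np.sqrt(x.var(axis=0, keepdims=True) + eps)

def layer_norm(x, eps=1e-5):
    # LN: statistics per example, computed across the feature dimension
    return (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + eps)

x = np.random.randn(8, 16)                    # hypothetical activations: batch of 8, 16 features
print(batch_norm(x).mean(axis=0).round(6))    # ~0 for every feature column
print(layer_norm(x).mean(axis=1).round(6))    # ~0 for every example row
```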
-
SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration (ARXIV/Convolution Neural Network, 2020. 3. 17. 11:15)
https://arxiv.org/abs/2003.05891v1
Accelerating the inference speed of CNNs is critical to their deployment in real-world applications. Among all the pruning approaches, those implementing a sparsity learning framework have shown to be effective as they learn and prune the models in an end-..
abstract: This paper ... the inference of CNNs ..
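As a rough illustration of the generic sparsity-learning setup the abstract refers to, the sketch below adds an L1 penalty on BatchNorm scale factors to the training loss; this baseline formulation is an assumption on my part and does not reproduce SASL's saliency-adaptive regularization.

```python
import torch
import torch.nn as nn

# Sketch of one sparsity-learning training step (assumed baseline: L1 penalty on
# per-channel BatchNorm scale factors; SASL's saliency-adaptive weighting is not shown).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
sparsity_lambda = 1e-4                         # hypothetical regularization strength

x = torch.randn(4, 3, 32, 32)
task_loss = model(x).mean()                    # stand-in for the real task loss
l1 = sum(m.weight.abs().sum() for m in model.modules() if isinstance(m, nn.BatchNorm2d))
loss = task_loss + sparsity_lambda * l1        # channels pushed toward zero scale become prune candidates
optimizer.zero_grad()
loss.backward()
optimizer.step()
```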
-
Bayesian Deep Learning and a Probabilistic Perspective of Generalization (ARXIV/Neural Network, 2020. 3. 17. 10:43)
https://arxiv.org/abs/2002.08791v1
The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights. Bayesian marginalization can particularly improve the accuracy and calibration of modern deep neural networks, which are typically und..
Abstract: The core .. of the Bayesian approach
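A toy sketch of what marginalization means in practice: predictions are averaged over many sampled weight settings rather than taken from one point estimate. The logistic model and the `weight_samples` standing in for posterior samples are hypothetical, not from the paper.

```python
import numpy as np

def predict(w, x):
    # toy probabilistic classifier p(y=1 | x, w)
    return 1.0 / (1.0 + np.exp(-(x @ w)))

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                     # 5 inputs, 3 features
weight_samples = rng.normal(size=(100, 3))      # stand-in for samples from p(w | data)

single_setting = predict(weight_samples[0], x)                            # one weight setting
marginalized = np.mean([predict(w, x) for w in weight_samples], axis=0)   # Bayesian model average
```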
-
Orderless Recurrent Models for Multi-label Classification (ARXIV/Recurrent Neural Network, 2020. 3. 16. 11:10)
https://arxiv.org/abs/1911.09996v3
Recurrent neural networks (RNN) are popular for many computer vision tasks, including multi-label classification. Since RNNs produce sequential outputs, labels need to be ordered for the multi-label classification task. Current approaches sort labels accor..
abstract: This paper ... the ground t.. according to the predicted label order
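A rough sketch of the ordering issue: instead of imposing a fixed label order, the ground-truth label set can be aligned to whatever order the model predicts before the step-wise loss is computed. The greedy alignment below is my own illustrative stand-in, not the paper's assignment procedure.

```python
import torch

def align_targets(step_logits, target_label_set):
    """Greedily order a set of ground-truth labels to follow the model's own preferences.

    step_logits: (T, num_classes) per-step scores from the RNN decoder.
    target_label_set: set of ground-truth class ids (order-free).
    """
    aligned, remaining = [], set(target_label_set)
    for t in range(step_logits.size(0)):
        if not remaining:
            break
        best = max(remaining, key=lambda c: step_logits[t, c].item())  # highest-scored remaining label
        aligned.append(best)
        remaining.discard(best)
    return torch.tensor(aligned)

targets = align_targets(torch.randn(4, 10), {2, 5, 7})   # e.g. tensor([7, 2, 5]), depending on the scores
```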
-
Hand Segmentation and Fingertip Tracking from Depth Camera Images Using Deep Convolutional Neural Network and Multi-task SegNet (ARXIV/Convolution Neural Network, 2020. 3. 13. 13:17)
https://arxiv.org/abs/1901.03465v3
Hand segmentation and fingertip detection play an indispensable role in hand gesture-based human-machine interaction systems. In this study, we propose a method to discriminate hand components and to locate fingertips in RGB-D images. ..
-
Two-sample Testing Using Deep Learning (ARXIV/Neural Network, 2020. 3. 11. 22:32)
https://arxiv.org/abs/1910.06239v2
We propose a two-sample testing procedure based on learned deep neural network representations. To this end, we define two test statistics that perform an asymptotic location test on data samples mapped onto a hidden layer. The tests are consistent and asy..
abstract: This paper proposes a two-sample test procedure based on learned neural network representations ..
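A minimal sketch of the overall idea, under my own simplifications: map both samples through a (here untrained) hidden-layer representation and run a location test on the mapped samples. The per-dimension t-test is only a stand-in for the paper's asymptotic test statistics.

```python
import numpy as np
from scipy import stats

def hidden_layer(x, W, b):
    return np.maximum(x @ W + b, 0.0)              # toy one-layer ReLU representation

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 8)), rng.normal(size=8)
sample_p = rng.normal(loc=0.0, size=(200, 4))      # sample from P
sample_q = rng.normal(loc=0.3, size=(200, 4))      # sample from Q (shifted mean)

h_p, h_q = hidden_layer(sample_p, W, b), hidden_layer(sample_q, W, b)
t_stat, p_val = stats.ttest_ind(h_p, h_q)          # location test per hidden dimension
print(p_val.min())                                  # small values hint that P != Q (no multiplicity correction here)
```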
-
Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence (ARXIV/IT, 2020. 3. 10. 12:00)
https://arxiv.org/abs/2002.04803v1
Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lies the tools and the methods that are driving it, from processi..
-
On Feature Normalization and Data Augmentation (ARXIV/Neural Network, 2020. 3. 10. 11:31)
https://arxiv.org/abs/2002.11102v2
Modern neural network training relies heavily on data augmentation for improved generalization. After the initial success of label-preserving augmentations, there has been a recent surge of interest in label-perturbing approaches, which combine features an..
abstract: This paper ... extracted by feature normalization and utilizing the first and second moments ..
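A small sketch of the moment-based idea as I read the snippet: the per-channel mean and standard deviation (first and second moments) extracted during feature normalization of one example are injected into another. The function below is an assumption about the mechanism, not the paper's exact formulation, and the accompanying label interpolation is omitted.

```python
import torch

def exchange_moments(feat_a, feat_b, eps=1e-5):
    # feat_*: (C, H, W) feature maps of two different inputs
    mean_a = feat_a.mean(dim=(1, 2), keepdim=True)
    std_a = feat_a.std(dim=(1, 2), keepdim=True)
    mean_b = feat_b.mean(dim=(1, 2), keepdim=True)
    std_b = feat_b.std(dim=(1, 2), keepdim=True)
    normalized_a = (feat_a - mean_a) / (std_a + eps)   # strip A's moments
    return normalized_a * std_b + mean_b               # re-dress with B's moments

a, b = torch.randn(16, 8, 8), torch.randn(16, 8, 8)
mixed = exchange_moments(a, b)                          # A's content, B's feature statistics
```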
-
AI outperformed every dermatologist: Improved dermoscopic melanoma diagnosis through customizing batch logic and loss function in an optimized Deep CNN architecture (ARXIV/Convolution Neural Network, 2020. 3. 9. 14:11)
https://arxiv.org/abs/2003.02597v1
Melanoma, one of the most dangerous types of skin cancer, results in a very high mortality rate. Early detection and resection are two key points for a successful cure. Recent research has used artificial intelligence to classify melanom..
-
Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks (ARXIV/Convolution Neural Network, 2020. 3. 8. 14:53)
https://arxiv.org/abs/2003.02800v1
Modern deep networks have millions to billions of parameters, which leads to high memory and energy requirements during training as well as during inference on resource-constrained edge devices. Consequently, pruning techniques have been proposed that remo..
abstract: Modern deep ..
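A generic sketch of pruning while training continues: every so often, the convolution filters with the smallest L1 norm are zeroed in place. The criterion and schedule here are my own assumptions, not the specific method of the paper.

```python
import torch
import torch.nn as nn

def prune_smallest_filters(conv, prune_ratio=0.3):
    # Zero out the output filters with the smallest L1 norm (generic magnitude criterion).
    with torch.no_grad():
        norms = conv.weight.abs().sum(dim=(1, 2, 3))   # one norm per output filter
        n_prune = int(prune_ratio * norms.numel())
        weakest = torch.argsort(norms)[:n_prune]
        conv.weight[weakest] = 0.0                      # pruned in place; training simply continues

conv = nn.Conv2d(3, 16, 3)
prune_smallest_filters(conv)                            # e.g. call every few epochs during training
```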