CNN
-
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ARXIV/Convolution Neural Network, 2020. 5. 13. 17:13)
https://arxiv.org/abs/1905.11946v3
abstract: ConvNets are commonly developed under a constrained resource budget and then scaled up ..
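The preview cuts off before the method, but the paper's compound scaling rule fits in a few lines: scale depth by α^φ, width by β^φ, and input resolution by γ^φ under the constraint α·β²·γ² ≈ 2, so total FLOPs grow roughly 2^φ. A minimal sketch using the grid-searched constants from the paper (α = 1.2, β = 1.1, γ = 1.15):

```python
# Compound scaling rule from the EfficientNet paper:
#   depth      d = alpha ** phi
#   width      w = beta  ** phi
#   resolution r = gamma ** phi
# subject to alpha * beta**2 * gamma**2 ~= 2, so FLOPs grow roughly 2**phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # grid-searched on the B0 baseline

def compound_scale(phi: float) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers for a given phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(5):  # roughly B0 .. B4
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```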
-
Focal Loss for Dense Object Detection (ARXIV/Convolution Neural Network, 2020. 5. 4. 20:40)
https://arxiv.org/abs/1708.02002
abstract: High-accuracy object detectors are based on a two-stage approach, in which a sparse set of candidate object ..
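The loss itself is one line: FL(p_t) = -α_t (1 - p_t)^γ log(p_t), which down-weights easy examples so training on a dense set of anchors is not dominated by easy negatives. A minimal PyTorch sketch of the binary (per-anchor) form with the paper's defaults α = 0.25, γ = 2:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)            # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Well-classified examples (high p_t) contribute almost nothing once gamma > 0.
logits = torch.tensor([3.0, -4.0, 0.1])
targets = torch.tensor([1.0, 0.0, 1.0])
print(focal_loss(logits, targets))
```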
-
Frustratingly Simple Few-Shot Object Detection (ARXIV/Convolution Neural Network, 2020. 3. 31. 15:56)
https://arxiv.org/abs/2003.06957v1
abstract: Detecting rare objects from only a few examples is an emerging problem. According to prior research, ..
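The recipe the abstract points at is two-stage: train a standard detector on base classes, then freeze everything except the last layer and fine-tune only that on the rare classes. A minimal PyTorch sketch using torchvision's Faster R-CNN as a stand-in detector (the model choice and class count here are illustrative, not the paper's exact setup):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stage 1 (assumed done elsewhere): train the full detector on base classes.
model = fasterrcnn_resnet50_fpn(num_classes=21)

# Stage 2: freeze every parameter ...
for param in model.parameters():
    param.requires_grad = False
# ... then unfreeze only the final box classifier/regressor and fine-tune it
# on the balanced few-shot set of base + novel classes.
for param in model.roi_heads.box_predictor.parameters():
    param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.001, momentum=0.9)
```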
-
SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration (ARXIV/Convolution Neural Network, 2020. 3. 17. 11:15)
https://arxiv.org/abs/2003.05891v1
abstract: This paper addresses CNN inference ..
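The preview stops before the method, so the sketch below shows only the generic sparsity-learning framework the abstract places SASL in: an L1 penalty on BatchNorm scale factors drives unimportant channels toward zero so they can be pruned end-to-end. The paper's actual contribution, adapting the penalty strength per filter by saliency, is noted in a comment but not implemented here:

```python
import torch
import torch.nn as nn

def bn_sparsity_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """L1 penalty on BatchNorm scale factors (gamma). Channels whose gamma is
    driven toward zero become pruning candidates. SASL's idea, per the
    abstract, is to adapt this strength per filter by saliency rather than
    use one global lam; that adaptive weighting is omitted in this sketch."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return lam * penalty

# Used during training as: loss = task_loss + bn_sparsity_penalty(model)
```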
-
AI outperformed every dermatologist: Improved dermoscopic melanoma diagnosis through customizing batch logic and loss function in an optimized Deep CNN architecture (ARXIV/Convolution Neural Network, 2020. 3. 9. 14:11)
https://arxiv.org/abs/2003.02597v1
abstract: Melanoma, one of the most dangerous types of skin cancer, results in a very high mortality rate. Early detection and resection are two key points for a successful cure. Recent research has used artificial intelligence to classify melanom..
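The customized batch logic and loss function are exactly what the preview cuts off, so the snippet below is purely hypothetical: one plausible reading of "custom batch logic" is a class-balanced sampler that keeps the rare melanoma class from being swamped in each batch. It illustrates the kind of change the title refers to, not the paper's actual scheme:

```python
import random

def balanced_batches(benign_idx, melanoma_idx, batch_size=32):
    """Hypothetical batch logic: force each batch to be half melanoma and
    half benign, so the rare positive class is seen in every update.
    An illustration only, not the paper's exact method."""
    half = batch_size // 2
    while True:
        yield random.sample(melanoma_idx, half) + random.sample(benign_idx, half)

batches = balanced_batches(list(range(1000)), list(range(1000, 1100)))
print(next(batches)[:4])
```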
-
Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks (ARXIV/Convolution Neural Network, 2020. 3. 8. 14:53)
https://arxiv.org/abs/2003.02800v1
abstract: Modern deep networks have millions to billions of parameters, which ..
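As a rough sketch of the pruning-while-training idea, assuming simple L1-magnitude saliency (the paper may use a different criterion): periodically zero out the conv filters with the smallest norms so the network re-adapts to the removed capacity as training continues, instead of pruning only after convergence.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_smallest_filters(conv: nn.Conv2d, fraction: float = 0.3) -> None:
    """Zero out the `fraction` of output filters with the smallest L1 norm.
    Pruning-while-training sketch: call this every few epochs so training
    compensates for the removed filters."""
    norms = conv.weight.abs().sum(dim=(1, 2, 3))   # L1 norm per output filter
    k = int(fraction * norms.numel())
    if k == 0:
        return
    idx = norms.argsort()[:k]                      # indices of smallest-k filters
    conv.weight[idx] = 0.0
    if conv.bias is not None:
        conv.bias[idx] = 0.0

conv = nn.Conv2d(3, 16, kernel_size=3)
prune_smallest_filters(conv, fraction=0.25)
print((conv.weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item(), "filters zeroed")
```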
-
Separating the Effects of Batch Normalization on CNN Training Speed and Stability Using Classical Adaptive Filter Theory (ARXIV/Convolution Neural Network, 2020. 3. 3. 14:29)
https://arxiv.org/abs/2002.10674v1
abstract: Batch Normalization (BatchNorm) is commonly used in Convolutional Neural Networks (CNNs) to improve training speed and stability. However, there is still limited consensus on why this technique is effective. This paper uses concepts from the ..
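For reference, the operation the paper analyzes: training-time BatchNorm normalizes each channel by its batch statistics and then applies a learned affine transform γ·x̂ + β. A minimal numpy sketch of the forward pass:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-time BatchNorm over an (N, C, H, W) batch: normalize each
    channel by its batch mean and variance, then apply the learned affine
    transform gamma * x_hat + beta."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

x = np.random.randn(8, 4, 16, 16)
y = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=(0, 2, 3)).round(6), y.std(axis=(0, 2, 3)).round(3))
```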