python - Caffe network getting very low loss but very bad accuracy in testing -
I'm new to Caffe, and I'm seeing some unusual behavior. I'm trying to use fine-tuning on bvlc_reference_caffenet to accomplish an OCR task.
I've taken the pretrained net, changed the last FC layer to the number of output classes I have, and retrained. After a few thousand iterations I'm getting loss rates of ~0.001 and accuracy over 90 percent when the network tests itself. That said, when I try to run the network on my data myself, I get awful results, not exceeding 7 or 8 percent accuracy.
The code I'm using to run the net is:

```python
import caffe

net = caffe.Classifier('bvlc_reference_caffenet/deploy.prototxt',
                       'bvlc_reference_caffenet/caffenet_train_iter_28000.caffemodel',
                       image_dims=(227, 227, 1))
input_image = caffe.io.load_image('/training_processed/6/0.png')
# predict takes a list of images and formats them for the Caffe net automatically
prediction = net.predict([input_image])
cls = prediction[0].argmax()
```

Any thoughts on why the performance might be this poor?
Thanks!
PS: Some additional info that may or may not be of use. When classifying as shown above, the classifier seems to favor a few classes. Even though I have a 101-class problem, it seems to assign at most 15 different classes.
PPS: I'm fairly sure I'm not overfitting. I've been testing along the way with snapshots, and they exhibit the same poor results.
The code you posted for testing the model seems to miss some components:

- It seems you did not subtract the image mean.
- You did not swap the channels from RGB to BGR.
- You did not scale the inputs to the [0..255] range.

Looking at similar instances of caffe.Classifier you may see something like:

```python
net = caffe.Classifier('bvlc_reference_caffenet/deploy.prototxt',
                       'bvlc_reference_caffenet/caffenet_train_iter_28000.caffemodel',
                       mean=np.load('ilsvrc_2012_mean.npy'),
                       input_scale=1.0,
                       raw_scale=255,
                       channel_swap=(2, 1, 0),
                       image_dims=(227, 227, 1))
```

It is crucial to apply the same input transformations at test time as were applied during training.
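To make the three missing transformations concrete, here is a minimal NumPy sketch of what they do to a loaded image. This is an illustration of the preprocessing steps, not Caffe's internal code; the tiny image and the zero mean are made-up stand-ins for a real photo and the ILSVRC mean file:

```python
import numpy as np

def preprocess(img, mean):
    """Apply the three transformations discussed above.

    img:  H x W x 3 float array in RGB with values in [0, 1]
          (the format caffe.io.load_image returns).
    mean: H x W x 3 mean image in BGR, on the [0, 255] scale
          (a stand-in for the real ilsvrc_2012_mean.npy).
    """
    x = img * 255.0    # raw_scale=255: bring inputs to the [0..255] range
    x = x[:, :, ::-1]  # channel_swap=(2, 1, 0): reorder RGB -> BGR
    x = x - mean       # subtract the training-set mean image
    return x

# Usage: a 1x1 "image" with R=1.0, G=0.5, B=0.0
img = np.array([[[1.0, 0.5, 0.0]]])
out = preprocess(img, np.zeros((1, 1, 3)))
print(out[0, 0])  # BGR order, scaled to [0..255]: [0.0, 127.5, 255.0]
```

If the net was trained with these transformations in its data layer, skipping any one of them at test time feeds the model inputs from a different distribution, which is consistent with the near-random accuracy you are seeing.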
python image-processing computer-vision deep-learning caffe