The dataset was shot with a phone at a 1:1 aspect ratio. I then used my earlier post "OpenCV修改图片尺寸" to resize every image to 28*28, and the post "OpenCV制作CNTK数据集" to extract the grayscale pixel values and write them into a text file. A few lines were then taken out of that txt file to serve as the test set.
That is the rough idea. I did not take many photos: in the end there are 179 training images and 14 test images. A screenshot of the experiment results is at the end of the post.
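As a rough illustration of those two preprocessing steps (the full scripts are in the earlier posts), the Python sketch below resizes each photo to 28*28 with OpenCV, flattens the grayscale pixels, and writes one CNTKTextFormat line per image. The folder layout, label mapping, and output file name are assumptions made for this example only.

# Minimal sketch of the preprocessing described above (paths and labels are assumptions).
import os
import cv2

# Hypothetical layout: one sub-folder per leaf class, e.g. data/raw/0, data/raw/1, ...
raw_dir = "data/raw"
out_file = "trainimageleaf.txt"
num_classes = 4

with open(out_file, "w") as f:
    for label in range(num_classes):
        class_dir = os.path.join(raw_dir, str(label))
        for name in os.listdir(class_dir):
            img = cv2.imread(os.path.join(class_dir, name), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue  # skip files OpenCV cannot read
            img = cv2.resize(img, (28, 28))  # 28*28, matching dim = 784 in the reader config
            one_hot = ["1" if i == label else "0" for i in range(num_classes)]
            pixels = [str(int(v)) for v in img.flatten()]  # row-major 784 grayscale values
            f.write("|labels " + " ".join(one_hot) + " |features " + " ".join(pixels) + "\n")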
Dataset screenshot:
CNTK configuration file:
rootDir = ".."
configDir = "$rootDir$/Config"
dataDir = "$rootDir$/Data"
outputDir = "$rootDir$/Output"
modelDir = "$outputDir$/Models"
deviceId = -1
command = train:test
precision = "float"
modelPath = "$modelDir$/02_Convolution"
traceLevel = 1
numMBsToShowResult = 100
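Two notes on the top-level settings: deviceId = -1 tells CNTK to run on the CPU (a GPU would be selected with 0, 1, ...), and command = train:test simply runs the train block and then the test block below, in that order.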
#######################################
# TRAINING CONFIG #
#######################################
train = [
    action = "train"

    NDLNetworkBuilder = [
        imageLayout = "cudnn"
        initOnCPUOnly = true
        ndlMacros = "$configDir$/Macros.ndl"
        networkDescription = "$configDir$/leaf.ndl"
    ]

    SGD = [
        epochSize = 60000
        minibatchSize = 32
        #learningRatesPerSample = 0.003125 # TODO
        #momentumAsTimeConstant = 0
        learningRatesPerMB = 0.1*5:0.3
        momentumPerMB = 0*4:0.7
        maxEpochs = 15
    ]

    reader = [
        readerType = "CNTKTextFormatReader"
        file = "$dataDir$/trainimageleaf.txt"
        input = [
            features = [
                dim = 784
                format = "dense"
            ]
            labels = [
                dim = 4
                format = "dense"
            ]
        ]
    ]
]
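The learning-rate and momentum values use CNTK's epoch-schedule syntax: learningRatesPerMB = 0.1*5:0.3 means a per-minibatch learning rate of 0.1 for the first 5 epochs and 0.3 afterwards, and momentumPerMB = 0*4:0.7 means no momentum for the first 4 epochs and 0.7 from then on. epochSize = 60000 appears to be carried over from the MNIST 02_Convolution sample this config is based on; with only 179 training images the reader just sweeps through the data repeatedly until that many samples have been seen.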
#######################################
# TEST CONFIG #
#######################################
test = [
    action = "test"
    minibatchSize = 15

    reader = [
        readerType = "CNTKTextFormatReader"
        file = "$dataDir$/testimageleaf.txt"
        input = [
            features = [
                dim = 784
                format = "dense"
            ]
            labels = [
                dim = 4
                format = "dense"
            ]
        ]
    ]
]
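For reference, CNTKTextFormatReader expects one sample per line in trainimageleaf.txt / testimageleaf.txt, with the stream names matching the reader config above: a 4-value one-hot label and 784 dense pixel values. An illustrative line (pixel values shortened here) looks like:

|labels 0 1 0 0 |features 0 0 12 87 255 ... 0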
NDL network configuration (LeNet-5):
load = ndlMnistMacros
run = DNN

ndlMnistMacros = [
    imageW = 28
    imageH = 28
    labelDim = 4

    features = ImageInput(imageW, imageH, 1, imageLayout=$imageLayout$)
    featScale = Constant(0.00390625)
    featScaled = Scale(featScale, features)
    labels = InputValue(labelDim)
]
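The constant 0.00390625 is simply 1/256, so featScaled rescales the 0–255 grayscale pixel values into roughly the [0, 1) range before they reach the first convolution.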
DNN = [
    # conv1
    kW1 = 5
    kH1 = 5
    cMap1 = 16
    hStride1 = 1
    vStride1 = 1
    # weight[cMap1, kW1 * kH1 * inputChannels]
    # Conv2DReLULayer is defined in Macros.ndl
    conv1 = Conv2DReLULayer(featScaled, cMap1, 25, kW1, kH1, hStride1, vStride1, 10, 1)

    # pool1
    pool1W = 2
    pool1H = 2
    pool1hStride = 2
    pool1vStride = 2
    # MaxPooling is a standard NDL node.
    pool1 = MaxPooling(conv1, pool1W, pool1H, pool1hStride, pool1vStride, imageLayout=$imageLayout$)

    # conv2
    kW2 = 5
    kH2 = 5
    cMap2 = 32
    hStride2 = 1
    vStride2 = 1
    # weight[cMap2, kW2 * kH2 * cMap1]
    # ConvNDReLULayer is defined in Macros.ndl
    conv2 = ConvNDReLULayer(pool1, kW2, kH2, cMap1, 400, cMap2, hStride2, vStride2, 10, 1)

    # pool2
    pool2W = 2
    pool2H = 2
    pool2hStride = 2
    pool2vStride = 2
    # MaxNDPooling is defined in Macros.ndl
    pool2 = MaxNDPooling(conv2, pool2W, pool2H, pool2hStride, pool2vStride, imageLayout=$imageLayout$)

    h1Dim = 128
    # DNNImageSigmoidLayer and DNNLayer are defined in Macros.ndl
    h1 = DNNImageSigmoidLayer(7, 7, cMap2, h1Dim, pool2, 1)
    ol = DNNLayer(h1Dim, labelDim, h1, 1)

    ce = CrossEntropyWithSoftmax(labels, ol)
    errs = ErrorPrediction(labels, ol)

    # Special Nodes
    FeatureNodes = (features)
    LabelNodes = (labels)
    CriterionNodes = (ce)
    EvalNodes = (errs)
    OutputNodes = (ol)
]
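To see why DNNImageSigmoidLayer is given a 7 x 7 x cMap2 input, trace the spatial size (assuming the convolution macros use zero-padding, as in the standard CNTK MNIST sample): the 28*28 input stays 28*28 after conv1, drops to 14*14 after the 2*2 stride-2 pool1, stays 14*14 after conv2, and drops to 7*7 after pool2, so the hidden layer sees 7 * 7 * 32 = 1568 values. The same arithmetic explains the literals in the convolution calls: 25 = 5*5*1 weights per conv1 filter and 400 = 5*5*16 weights per conv2 filter.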
Dataset:
The error rate came out at 0%. This is probably because the photos were taken against white paper, although even those shots have shadows and vary between sharp and blurry; overall, image data like this is probably still on the easy side.
Run results: