#input_video = 0 # laptop camera
input_video = 1 # USB webcam
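For context, a minimal sketch of how this index is typically consumed with OpenCV (the display loop is an illustration, not the project's exact code):

import cv2

cap = cv2.VideoCapture(input_video)          # input_video chosen above (0 or 1)
while cap.isOpened():
    ret, frame = cap.read()                  # grab one BGR frame from the camera
    if not ret:
        break
    cv2.imshow('dobble', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()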
Rotation - already applied at line 177
Horizontal and vertical flip - not a good fit for our dataset (and possibly already being applied at line 179?)
featurewise_center
samplewise_center
featurewise_std_normalization
zca_epsilon
zca_whitening
shear_range
channel_shift_range
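For reference, a minimal sketch of how these options map onto Keras's ImageDataGenerator; the values below are illustrative placeholders, not the settings actually used at lines 177-179:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=180,                   # rotation - already covered in the script
    horizontal_flip=False,                # flips judged not good for this dataset
    vertical_flip=False,
    featurewise_center=False,             # subtract the dataset mean
    samplewise_center=False,              # subtract each image's own mean
    featurewise_std_normalization=False,  # divide by the dataset std
    zca_whitening=False,                  # decorrelate pixels (uses zca_epsilon)
    zca_epsilon=1e-6,
    shear_range=10.0,                     # shear angle in degrees
    channel_shift_range=20.0,             # random shifts of channel values
)
# featurewise_* and ZCA statistics must be computed before use:
# datagen.fit(train_X)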
(base) masaaki@masaaki-H110M4-M01:/media/masaaki/Ubuntu_Disk/AI/dobble_buddy$ python dobble_tutorial.py
PARAMETERS:
Normalized shape of images : 224 x 224
Card Decks : 10 ['dobble_deck01_cards_57-augmented', 'dobble_deck02_cards_55', 'dobble_deck03_cards_55-augmented', 'dobble_deck04_cards_55', 'dobble_deck05_cards_55-augmented', 'dobble_deck06_cards_55', 'dobble_deck07_cards_55-augmented', 'dobble_deck08_cards_55', 'dobble_deck09_cards_55-augmented', 'dobble_deck10_cards_55']
TRAINING/VALIDATION DATA SETS:
Shape of training data (X) is : (22465, 224, 224, 3)
Shape of training data (y) is : (22465,)
Shape of validation data (X) is : (5617, 224, 224, 3)
Shape of validation data (y) is : (5617,)
2021-07-10 17:53:35.693689: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2021-07-10 17:53:36.012876: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3199980000 Hz
2021-07-10 17:53:36.057387: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559761433360 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-10 17:53:36.057436: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-07-10 17:53:36.281905: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
MODEL SUMMARY:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 222, 222, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 109, 109, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 52, 52, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 26, 26, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 128) 0
_________________________________________________________________
dropout (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 18432) 0
_________________________________________________________________
dense (Dense) (None, 512) 9437696
_________________________________________________________________
activation (Activation) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 58) 29754
_________________________________________________________________
activation_1 (Activation) (None, 58) 0
=================================================================
Total params: 9,708,282
Trainable params: 9,708,282
Non-trainable params: 0
_________________________________________________________________
TRAIN MODEL:
Epoch 1/59
702/702 [==============================] - 649s 925ms/step - loss: 3.3797 - val_loss: 2.4933
Epoch 2/59
702/702 [==============================] - 619s 881ms/step - loss: 1.8767 - val_loss: 1.2642
Epoch 3/59
702/702 [==============================] - 620s 883ms/step - loss: 0.9784 - val_loss: 0.6677
Epoch 4/59
702/702 [==============================] - 623s 888ms/step - loss: 0.5961 - val_loss: 0.4000
Epoch 5/59
702/702 [==============================] - 619s 882ms/step - loss: 0.5095 - val_loss: 0.5559
Epoch 6/59
702/702 [==============================] - 621s 884ms/step - loss: 0.4237 - val_loss: 0.2355
Epoch 7/59
702/702 [==============================] - 618s 880ms/step - loss: 0.2760 - val_loss: 0.2104
Epoch 8/59
702/702 [==============================] - 618s 880ms/step - loss: 0.2350 - val_loss: 0.1733
Epoch 9/59
702/702 [==============================] - 617s 879ms/step - loss: 0.2408 - val_loss: 0.1829
Epoch 10/59
702/702 [==============================] - 614s 875ms/step - loss: 0.2283 - val_loss: 0.3288
Epoch 11/59
702/702 [==============================] - 613s 874ms/step - loss: 0.2097 - val_loss: 0.1317
Epoch 12/59
702/702 [==============================] - 616s 877ms/step - loss: 0.1609 - val_loss: 0.1008
Epoch 13/59
702/702 [==============================] - 615s 876ms/step - loss: 0.1632 - val_loss: 0.1554
Epoch 14/59
702/702 [==============================] - 627s 894ms/step - loss: 0.1713 - val_loss: 0.0993
Epoch 15/59
702/702 [==============================] - 622s 886ms/step - loss: 0.1276 - val_loss: 0.2646
Epoch 16/59
702/702 [==============================] - 617s 878ms/step - loss: 0.1852 - val_loss: 0.1097
Epoch 17/59
702/702 [==============================] - 619s 882ms/step - loss: 0.1387 - val_loss: 0.1637
Epoch 18/59
702/702 [==============================] - 617s 878ms/step - loss: 0.1229 - val_loss: 0.1576
Epoch 19/59
702/702 [==============================] - 616s 877ms/step - loss: 0.1321 - val_loss: 0.1307
Epoch 20/59
702/702 [==============================] - 617s 879ms/step - loss: 0.1246 - val_loss: 0.0790
Epoch 21/59
702/702 [==============================] - 614s 875ms/step - loss: 0.1165 - val_loss: 0.0906
Epoch 22/59
702/702 [==============================] - 614s 875ms/step - loss: 0.1205 - val_loss: 0.1210
Epoch 23/59
702/702 [==============================] - 614s 875ms/step - loss: 0.1106 - val_loss: 0.0839
Epoch 24/59
702/702 [==============================] - 614s 875ms/step - loss: 0.0977 - val_loss: 0.0636
Epoch 25/59
702/702 [==============================] - 616s 877ms/step - loss: 0.1171 - val_loss: 0.1314
Epoch 26/59
702/702 [==============================] - 634s 903ms/step - loss: 0.1127 - val_loss: 0.0733
Epoch 27/59
702/702 [==============================] - 635s 904ms/step - loss: 0.1047 - val_loss: 0.0715
Epoch 28/59
702/702 [==============================] - 617s 880ms/step - loss: 0.1182 - val_loss: 0.0712
Epoch 29/59
702/702 [==============================] - 635s 904ms/step - loss: 0.0948 - val_loss: 0.0857
Epoch 30/59
702/702 [==============================] - 618s 881ms/step - loss: 0.1238 - val_loss: 0.0927
Epoch 31/59
702/702 [==============================] - 617s 878ms/step - loss: 0.0966 - val_loss: 0.0701
Epoch 32/59
702/702 [==============================] - 617s 879ms/step - loss: 0.0970 - val_loss: 0.0876
Epoch 33/59
702/702 [==============================] - 617s 880ms/step - loss: 0.1322 - val_loss: 0.0762
Epoch 34/59
702/702 [==============================] - 617s 878ms/step - loss: 0.0835 - val_loss: 0.0815
Epoch 35/59
702/702 [==============================] - 617s 879ms/step - loss: 0.1001 - val_loss: 0.0716
Epoch 36/59
702/702 [==============================] - 616s 878ms/step - loss: 0.1000 - val_loss: 0.0888
Epoch 37/59
702/702 [==============================] - 618s 880ms/step - loss: 0.1183 - val_loss: 0.0640
Epoch 38/59
702/702 [==============================] - 618s 880ms/step - loss: 0.1058 - val_loss: 0.0871
Epoch 39/59
702/702 [==============================] - 617s 878ms/step - loss: 0.1179 - val_loss: 0.0759
Epoch 40/59
702/702 [==============================] - 617s 879ms/step - loss: 0.1015 - val_loss: 0.1003
Epoch 41/59
702/702 [==============================] - 617s 879ms/step - loss: 0.1082 - val_loss: 0.0679
Epoch 42/59
702/702 [==============================] - 617s 880ms/step - loss: 0.0968 - val_loss: 0.0693
Epoch 43/59
702/702 [==============================] - 616s 878ms/step - loss: 0.1161 - val_loss: 0.1391
Epoch 44/59
702/702 [==============================] - 616s 877ms/step - loss: 0.0920 - val_loss: 0.0960
Epoch 45/59
702/702 [==============================] - 618s 880ms/step - loss: 0.0870 - val_loss: 0.1002
Epoch 46/59
702/702 [==============================] - 616s 878ms/step - loss: 0.1016 - val_loss: 0.1532
Epoch 47/59
702/702 [==============================] - 614s 874ms/step - loss: 0.0802 - val_loss: 0.0577
Epoch 48/59
702/702 [==============================] - 615s 876ms/step - loss: 0.0731 - val_loss: 0.0723
Epoch 49/59
702/702 [==============================] - 614s 875ms/step - loss: 0.0901 - val_loss: 0.0977
Epoch 50/59
702/702 [==============================] - 616s 877ms/step - loss: 0.0947 - val_loss: 0.1096
Epoch 51/59
702/702 [==============================] - 615s 876ms/step - loss: 0.0747 - val_loss: 0.0812
Epoch 52/59
702/702 [==============================] - 615s 876ms/step - loss: 0.0922 - val_loss: 0.1069
Epoch 53/59
702/702 [==============================] - 615s 876ms/step - loss: 0.1031 - val_loss: 0.0655
Epoch 54/59
702/702 [==============================] - 615s 877ms/step - loss: 0.0882 - val_loss: 0.0971
Epoch 55/59
702/702 [==============================] - 615s 876ms/step - loss: 0.1150 - val_loss: 0.0693
Epoch 56/59
702/702 [==============================] - 614s 875ms/step - loss: 0.0963 - val_loss: 0.0707
Epoch 57/59
702/702 [==============================] - 615s 877ms/step - loss: 0.0993 - val_loss: 0.0638
Epoch 58/59
702/702 [==============================] - 616s 877ms/step - loss: 0.1011 - val_loss: 0.0984
Epoch 59/59
702/702 [==============================] - 615s 876ms/step - loss: 0.1028 - val_loss: 0.1870
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 222, 222, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 109, 109, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 52, 52, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 26, 26, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 128) 0
_________________________________________________________________
dropout (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 18432) 0
_________________________________________________________________
dense (Dense) (None, 512) 9437696
_________________________________________________________________
activation (Activation) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 58) 29754
_________________________________________________________________
activation_1 (Activation) (None, 58) 0
=================================================================
Total params: 9,708,282
Trainable params: 9,708,282
Non-trainable params: 0
_________________________________________________________________
Shape of test data (X) is : (12, 224, 224, 3)
Shape of test data (y) is : (12, 58)
EVALUATE MODEL:
1/1 [==============================] - 0s 266us/step - loss: 23.8451
./dobble_dataset/dobble_test01_cards : Test Accuracy = 0.9166666666666666
(base) masaaki@masaaki-H110M4-M01:/media/masaaki/Ubuntu_Disk/AI/dobble_buddy$ python dobble_test.py
2021-07-11 05:01:50.044420: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2021-07-11 05:01:50.136887: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3199980000 Hz
2021-07-11 05:01:50.137389: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563dae8954c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-11 05:01:50.137433: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-07-11 05:01:50.138320: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Shape of test data (X) is : (1267, 224, 224, 3)
Shape of test data (y) is : (1267, 58)
EVALUATE MODEL:
40/40 [==============================] - 9s 221ms/step - loss: 4.3736
./dobble_dataset/dobble_test02_cards : Test Accuracy = 0.6803472770323599
0.50% accuracy bound: 0.6716 - 0.6891
0.80% accuracy bound: 0.6636 - 0.6971
0.90% accuracy bound: 0.6589 - 0.7018
0.95% accuracy bound: 0.6547 - 0.7060
0.99% accuracy bound: 0.6465 - 0.7141
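As an aside, these bounds look like normal-approximation binomial confidence intervals over the 1,267 test cards; a sketch that closely reproduces them (the formula is an assumption about what dobble_test.py computes, and the z-scores are standard two-sided values):

import math

def accuracy_bounds(acc, n, z):
    # normal approximation to the binomial: acc +/- z * sqrt(acc*(1-acc)/n)
    half = z * math.sqrt(acc * (1.0 - acc) / n)
    return acc - half, acc + half

acc, n = 0.6803472770323599, 1267
for level, z in [(0.50, 0.674), (0.80, 1.282), (0.90, 1.645), (0.95, 1.960), (0.99, 2.576)]:
    lo, hi = accuracy_bounds(acc, n, z)
    print(f"{level:.2f}% accuracy bound: {lo:.4f} - {hi:.4f}")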
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 222, 222, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 109, 109, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 52, 52, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 26, 26, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 128) 0
_________________________________________________________________
dropout (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 18432) 0
_________________________________________________________________
dense (Dense) (None, 512) 9437696
_________________________________________________________________
activation (Activation) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 58) 29754
_________________________________________________________________
activation_1 (Activation) (None, 58) 0
=================================================================
Total params: 9,708,282
Trainable params: 9,708,282
Non-trainable params: 0
_________________________________________________________________
(base) masaaki@masaaki-H110M4-M01:/media/masaaki/Ubuntu_Disk/AI/dobble_buddy$ python dobble_tutorial.py
PARAMETERS:
Normalized shape of images : 224 x 224
Card Decks : 10 ['dobble_deck01_cards_57', 'dobble_deck02_cards_55', 'dobble_deck03_cards_55', 'dobble_deck04_cards_55', 'dobble_deck05_cards_55', 'dobble_deck06_cards_55', 'dobble_deck07_cards_55', 'dobble_deck08_cards_55', 'dobble_deck09_cards_55', 'dobble_deck10_cards_55']
TRAINING/VALIDATION DATA SETS:
Shape of training data (X) is : (449, 224, 224, 3)
Shape of training data (y) is : (449,)
Shape of validation data (X) is : (113, 224, 224, 3)
Shape of validation data (y) is : (113,)
2021-07-09 04:58:02.724720: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2021-07-09 04:58:02.746653: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3199980000 Hz
2021-07-09 04:58:02.746905: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x555b709c3a20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-09 04:58:02.746924: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-07-09 04:58:02.746997: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
MODEL SUMMARY:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 222, 222, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 109, 109, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 52, 52, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 26, 26, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 128) 0
_________________________________________________________________
dropout (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 18432) 0
_________________________________________________________________
dense (Dense) (None, 512) 9437696
_________________________________________________________________
activation (Activation) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 58) 29754
_________________________________________________________________
activation_1 (Activation) (None, 58) 0
=================================================================
Total params: 9,708,282
Trainable params: 9,708,282
Non-trainable params: 0
_________________________________________________________________
TRAIN MODEL:
Epoch 1/59
14/14 [==============================] - 14s 975ms/step - loss: 4.1059 - val_loss: 4.0662
Epoch 2/59
14/14 [==============================] - 13s 959ms/step - loss: 4.0710 - val_loss: 4.1179
Epoch 3/59
14/14 [==============================] - 13s 936ms/step - loss: 4.0521 - val_loss: 4.1025
Epoch 4/59
14/14 [==============================] - 13s 939ms/step - loss: 4.0509 - val_loss: 4.1296
Epoch 5/59
14/14 [==============================] - 13s 948ms/step - loss: 4.0449 - val_loss: 4.0707
Epoch 6/59
14/14 [==============================] - 13s 950ms/step - loss: 4.0505 - val_loss: 4.1071
Epoch 7/59
14/14 [==============================] - 13s 958ms/step - loss: 4.0433 - val_loss: 4.1346
Epoch 8/59
14/14 [==============================] - 13s 954ms/step - loss: 4.0504 - val_loss: 4.0856
Epoch 9/59
14/14 [==============================] - 13s 964ms/step - loss: 4.0330 - val_loss: 4.1261
Epoch 10/59
14/14 [==============================] - 14s 983ms/step - loss: 3.9718 - val_loss: 3.9883
Epoch 11/59
14/14 [==============================] - 13s 933ms/step - loss: 3.7644 - val_loss: 4.0952
Epoch 12/59
14/14 [==============================] - 13s 933ms/step - loss: 3.6456 - val_loss: 3.6974
Epoch 13/59
14/14 [==============================] - 13s 951ms/step - loss: 3.3521 - val_loss: 3.5460
Epoch 14/59
14/14 [==============================] - 13s 962ms/step - loss: 3.2253 - val_loss: 3.2729
Epoch 15/59
14/14 [==============================] - 15s 1s/step - loss: 2.9422 - val_loss: 3.2000
Epoch 16/59
14/14 [==============================] - 13s 962ms/step - loss: 2.7437 - val_loss: 3.0561
Epoch 17/59
14/14 [==============================] - 13s 962ms/step - loss: 2.6164 - val_loss: 3.0905
Epoch 18/59
14/14 [==============================] - 13s 950ms/step - loss: 2.5613 - val_loss: 2.7422
Epoch 19/59
14/14 [==============================] - 13s 948ms/step - loss: 2.5242 - val_loss: 2.9230
Epoch 20/59
14/14 [==============================] - 13s 954ms/step - loss: 2.3480 - val_loss: 2.6601
Epoch 21/59
14/14 [==============================] - 14s 1s/step - loss: 2.1049 - val_loss: 2.5116
Epoch 22/59
14/14 [==============================] - 14s 967ms/step - loss: 1.9024 - val_loss: 2.4045
Epoch 23/59
14/14 [==============================] - 13s 944ms/step - loss: 1.7881 - val_loss: 2.3397
Epoch 24/59
14/14 [==============================] - 13s 935ms/step - loss: 1.6218 - val_loss: 2.3310
Epoch 25/59
14/14 [==============================] - 13s 960ms/step - loss: 1.4528 - val_loss: 1.9856
Epoch 26/59
14/14 [==============================] - 13s 950ms/step - loss: 1.4757 - val_loss: 1.9892
Epoch 27/59
14/14 [==============================] - 13s 947ms/step - loss: 1.2487 - val_loss: 1.8838
Epoch 28/59
14/14 [==============================] - 13s 945ms/step - loss: 1.0968 - val_loss: 1.8848
Epoch 29/59
14/14 [==============================] - 14s 999ms/step - loss: 1.3052 - val_loss: 1.8807
Epoch 30/59
14/14 [==============================] - 13s 942ms/step - loss: 1.1706 - val_loss: 1.8130
Epoch 31/59
14/14 [==============================] - 13s 947ms/step - loss: 0.9345 - val_loss: 1.4584
Epoch 32/59
14/14 [==============================] - 13s 934ms/step - loss: 1.0239 - val_loss: 1.6938
Epoch 33/59
14/14 [==============================] - 13s 942ms/step - loss: 0.8465 - val_loss: 1.0062
Epoch 34/59
14/14 [==============================] - 13s 947ms/step - loss: 0.7187 - val_loss: 1.5815
Epoch 35/59
14/14 [==============================] - 13s 946ms/step - loss: 0.6261 - val_loss: 1.3732
Epoch 36/59
14/14 [==============================] - 13s 944ms/step - loss: 0.5681 - val_loss: 1.6571
Epoch 37/59
14/14 [==============================] - 13s 961ms/step - loss: 0.8443 - val_loss: 0.8410
Epoch 38/59
14/14 [==============================] - 13s 945ms/step - loss: 0.5554 - val_loss: 1.3257
Epoch 39/59
14/14 [==============================] - 13s 933ms/step - loss: 0.5168 - val_loss: 1.3477
Epoch 40/59
14/14 [==============================] - 13s 938ms/step - loss: 0.4323 - val_loss: 1.2225
Epoch 41/59
14/14 [==============================] - 13s 917ms/step - loss: 0.4062 - val_loss: 1.7126
Epoch 42/59
14/14 [==============================] - 13s 931ms/step - loss: 0.4107 - val_loss: 0.6231
Epoch 43/59
14/14 [==============================] - 14s 1s/step - loss: 0.4382 - val_loss: 1.3422
Epoch 44/59
14/14 [==============================] - 13s 937ms/step - loss: 0.3241 - val_loss: 1.4085
Epoch 45/59
14/14 [==============================] - 13s 933ms/step - loss: 0.2740 - val_loss: 0.6285
Epoch 46/59
14/14 [==============================] - 13s 941ms/step - loss: 0.2144 - val_loss: 1.7058
Epoch 47/59
14/14 [==============================] - 14s 967ms/step - loss: 0.4136 - val_loss: 1.6700
Epoch 48/59
14/14 [==============================] - 13s 953ms/step - loss: 0.2629 - val_loss: 1.4909
Epoch 49/59
14/14 [==============================] - 14s 984ms/step - loss: 0.2949 - val_loss: 1.4362
Epoch 50/59
14/14 [==============================] - 14s 1s/step - loss: 0.1808 - val_loss: 1.5405
Epoch 51/59
14/14 [==============================] - 14s 1s/step - loss: 0.1484 - val_loss: 1.7669
Epoch 52/59
14/14 [==============================] - 14s 966ms/step - loss: 0.1679 - val_loss: 1.4220
Epoch 53/59
14/14 [==============================] - 13s 940ms/step - loss: 0.1705 - val_loss: 1.3934
Epoch 54/59
14/14 [==============================] - 14s 967ms/step - loss: 0.1828 - val_loss: 0.5581
Epoch 55/59
14/14 [==============================] - 14s 967ms/step - loss: 0.2769 - val_loss: 1.3616
Epoch 56/59
14/14 [==============================] - 14s 974ms/step - loss: 0.1672 - val_loss: 0.5885
Epoch 57/59
14/14 [==============================] - 14s 969ms/step - loss: 0.1644 - val_loss: 1.4092
Epoch 58/59
14/14 [==============================] - 13s 950ms/step - loss: 0.1321 - val_loss: 1.7724
Epoch 59/59
14/14 [==============================] - 13s 931ms/step - loss: 0.1618 - val_loss: 0.8644
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 222, 222, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 111, 111, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 109, 109, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 54, 54, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 52, 52, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 26, 26, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 128) 0
_________________________________________________________________
dropout (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 18432) 0
_________________________________________________________________
dense (Dense) (None, 512) 9437696
_________________________________________________________________
activation (Activation) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 58) 29754
_________________________________________________________________
activation_1 (Activation) (None, 58) 0
=================================================================
Total params: 9,708,282
Trainable params: 9,708,282
Non-trainable params: 0
_________________________________________________________________
Shape of test data (X) is : (12, 224, 224, 3)
Shape of test data (y) is : (12, 58)
EVALUATE MODEL:
1/1 [==============================] - 0s 259us/step - loss: 0.9788
./dobble_dataset/dobble_test01_cards : Test Accuracy = 0.9166666666666666
(base) masaaki@masaaki-H110M4-M01:/media/masaaki/Ubuntu_Disk/AI/dobble_buddy$ python dobble_test.py
2021-07-09 05:12:45.148697: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2021-07-09 05:12:45.170665: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3199980000 Hz
2021-07-09 05:12:45.170934: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55eaef3a9300 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-07-09 05:12:45.170993: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-07-09 05:12:45.171209: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Shape of test data (X) is : (1267, 224, 224, 3)
Shape of test data (y) is : (1267, 58)
EVALUATE MODEL:
40/40 [==============================] - 8s 209ms/step - loss: 6.6324
./dobble_dataset/dobble_test02_cards : Test Accuracy = 0.4617205998421468
0.50% accuracy bound: 0.4523 - 0.4711
0.80% accuracy bound: 0.4438 - 0.4796
0.90% accuracy bound: 0.4388 - 0.4847
0.95% accuracy bound: 0.4343 - 0.4892
0.99% accuracy bound: 0.4256 - 0.4979
The game uses a deck of 55 cards, each printed with 8 different symbols. Any two cards always share exactly one matching symbol. The object of the game is to be the first to call out the symbol common to a given pair of cards.
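This "exactly one shared symbol" property comes from a finite projective plane of order 7 (57 possible cards and 57 symbols, of which the commercial deck ships 55). A sketch that generates such a deck and checks the property (the construction assumes the order n is prime):

from itertools import combinations

n = 7  # plane order; every card carries n + 1 = 8 symbols
cards = [list(range(n + 1))]                        # the "line at infinity"
for i in range(n):                                  # vertical lines through symbol 0
    cards.append([0] + [n + 1 + i * n + j for j in range(n)])
for i in range(n):                                  # lines of slope i through symbol i + 1
    for j in range(n):
        cards.append([i + 1] + [n + 1 + k * n + (i * k + j) % n for k in range(n)])

# every pair of cards shares exactly one symbol
assert all(len(set(a) & set(b)) == 1 for a, b in combinations(cards, 2))
print(len(cards))  # 57 possible cards; Dobble uses 55 of them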
opencv-contrib-python
tensorflow
keras
kaggle
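Assuming these are the pip package names for the project's dependencies, they can be installed in one step:

pip install opencv-contrib-python tensorflow keras kaggle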
Traceback (most recent call last):
  File "/home/masaaki/anaconda3/bin/conda", line 7, in <module>
    from conda.cli import main
ModuleNotFoundError: No module named 'conda'
KerasJson: example-keras-model-files/keras_curve_cnn2_line.json
KerasH5: example-keras-model-files/keras_curve_cnn2_line_weights.h5
OutputDir: keras_curve_cnn2_line
ProjectName: keras_curve_cnn2_line
XilinxPart: xcku5p-sfvb784-1-e
ClockPeriod: 5
IOType: io_parallel # options: io_serial/io_parallel
HLSConfig:
  Model:
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
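As a side note on the Precision line: ap_fixed<16,6> is a Vivado HLS fixed-point type with 16 bits total, 6 of them integer bits (sign included), leaving 10 fractional bits. A small Python sketch of the quantization this implies (the rounding and overflow handling here are a simplification of the HLS modes):

import math

def quantize_ap_fixed(x, total_bits=16, int_bits=6):
    frac_bits = total_bits - int_bits
    step = 2.0 ** -frac_bits              # resolution: 2^-10 for <16,6>
    lo = -(2.0 ** (int_bits - 1))         # -32
    hi = 2.0 ** (int_bits - 1) - step     # just under 32
    q = math.floor(x / step) * step       # truncate, like the AP_TRN default
    return min(max(q, lo), hi)            # saturate here (HLS default is AP_WRAP)

print(quantize_ap_fixed(3.14159))         # -> 3.140625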
ERROR: [XFORM 203-504] Stop unrolling loop 'Product1' (/home/masaaki/DNN/hls4ml/nnet_utils/nnet_dense.h:97) in function 'nnet::dense<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, config6>' because it may cause large runtime and excessive memory usage due to increase in code size. Please avoid unrolling the loop or form sub-functions for code in the loop body.
ERROR: [HLS 200-70] Pre-synthesis failed.
command 'ap_source' returned error code
while executing
"source build_prj.tcl"
("uplevel" body line 1)
invoked from within
"uplevel \#0 [list source $arg] "
# Load the trained model
from keras.models import load_model
model = load_model('keras_curve_cnn2_line.h5')

# Save the architecture as JSON
json_string = model.to_json()
json_name = 'keras_curve_cnn2_line.json'
with open(json_name, mode='w') as f:
    f.write(json_string)

# Save the weights separately
model.save_weights('keras_curve_cnn2_line_weights.h5')
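For completeness, the reverse operation, rebuilding the model from the JSON plus the weights file; this pairing is what the KerasJson/KerasH5 entries in the hls4ml config above point at:

from keras.models import model_from_json

with open('keras_curve_cnn2_line.json') as f:
    model = model_from_json(f.read())
model.load_weights('keras_curve_cnn2_line_weights.h5')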