Model Training
Created: 2021-04-23
Updated: 2021-04-23


    Training data:
      >>> train_data = packed_train_data.shuffle(500)
      >>> test_data = packed_test_data
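
    As a reference, below is a minimal, hedged sketch of how packed_train_data and packed_test_data might have been prepared. It assumes CSV input loaded with tf.data.experimental.make_csv_dataset, as in the TensorFlow "Load CSV data" tutorial; the file names and the label column are placeholders, not part of this page.
      import tensorflow as tf

      # Assumption: each dataset comes from a CSV file via make_csv_dataset,
      # yielding (features, label) batches. File names and the label column
      # ('survived') are hypothetical.
      def make_dataset(csv_path):
          return tf.data.experimental.make_csv_dataset(
              csv_path,
              batch_size=5,
              label_name='survived',
              na_value='?',
              num_epochs=1,
              ignore_errors=True)

      packed_train_data = make_dataset('train.csv')
      packed_test_data = make_dataset('eval.csv')

      # Only the training set is shuffled; the evaluation set keeps its order.
      train_data = packed_train_data.shuffle(500)
      test_data = packed_test_data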

    Training the model
    Here, "training the model" means fitting model to train_data: the Model.fit method adjusts the model's parameters so as to minimize the loss.
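
    The model itself is assumed to have been defined and compiled in an earlier step. A hedged sketch of what that might look like is shown below; the feature columns, layer sizes, and optimizer are illustrative assumptions, chosen only to be consistent with the feature-column deprecation warnings and the loss/acc values in the log that follows.
      import tensorflow as tf

      # Hypothetical feature columns; the real ones were defined earlier.
      # Indicator + vocabulary-list columns match the deprecation warnings below.
      feature_columns = [
          tf.feature_column.numeric_column('age'),
          tf.feature_column.indicator_column(
              tf.feature_column.categorical_column_with_vocabulary_list(
                  'sex', ['male', 'female'])),
      ]
      preprocessing_layer = tf.keras.layers.DenseFeatures(feature_columns)

      model = tf.keras.Sequential([
          preprocessing_layer,
          tf.keras.layers.Dense(128, activation='relu'),
          tf.keras.layers.Dense(128, activation='relu'),
          tf.keras.layers.Dense(1, activation='sigmoid'),  # binary output
      ])

      # binary_crossentropy is consistent with the loss values in the log;
      # 'acc' matches the accuracy metric name shown there.
      model.compile(loss='binary_crossentropy',
                    optimizer='adam',
                    metrics=['acc'])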

    As training progresses, the loss and accuracy for each epoch are displayed:
      >>> model.fit(train_data, epochs=20)
      WARNING:tensorflow:From /home/pi/venv/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4266: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
      Instructions for updating:
      The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
      WARNING:tensorflow:From /home/pi/venv/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4321: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
      Instructions for updating:
      The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
      Epoch 1/20
      126/Unknown - 8s 66ms/step - loss: 0.4839 - acc: 0.7368
      2021-04-23 12:40:34.532338: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
           [[{{node IteratorGetNext}}]]
      126/126 [==============================] - 8s 66ms/step - loss: 0.4839 - acc: 0.7368
      Epoch 2/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.4148 - acc: 0.8214
      Epoch 3/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3956 - acc: 0.8357
      Epoch 4/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3834 - acc: 0.8341
      Epoch 5/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3739 - acc: 0.8405
      Epoch 6/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3658 - acc: 0.8437
      Epoch 7/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3590 - acc: 0.8469
      Epoch 8/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3538 - acc: 0.8517
      Epoch 9/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3480 - acc: 0.8533
      Epoch 10/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3420 - acc: 0.8533
      Epoch 11/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3369 - acc: 0.8533
      Epoch 12/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3323 - acc: 0.8517
      Epoch 13/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3284 - acc: 0.8549
      Epoch 14/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3245 - acc: 0.8565
      Epoch 15/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3200 - acc: 0.8596
      Epoch 16/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3163 - acc: 0.8628
      Epoch 17/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3126 - acc: 0.8628
      Epoch 18/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3096 - acc: 0.8612
      Epoch 19/20
      126/126 [==============================] - 2s 12ms/step - loss: 0.3059 - acc: 0.8612
      Epoch 20/20
      126/126 [==============================] - 2s 13ms/step - loss: 0.3031 - acc: 0.8612
      <tensorflow.python.keras.callbacks.History object at 0x6414dfd0>
      >>>

    The deprecation warnings and the "Out of range: End of sequence" message in the log above are harmless; the latter simply means the input iterator reached the end of the dataset.
    epochs is the number of passes made over the training data.
    The accuracy improves overall as the epochs go by.
    For this model, the accuracy on the training data reached 0.8612 (i.e. 86.12%).
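
    The value returned by model.fit is the History object seen at the end of the log. A small usage sketch follows; it assumes model, train_data, and test_data are the objects defined above, and the key 'acc' matches the metric name shown in the training log.
      # Capture the History object returned by model.fit.
      history = model.fit(train_data, epochs=20)

      # Per-epoch metrics are stored in the .history dictionary.
      print(history.history['loss'][-1])  # final training loss
      print(history.history['acc'][-1])   # final training accuracy (about 0.86 here)

      # test_data, prepared above, can then be used to check generalization.
      test_loss, test_acc = model.evaluate(test_data)
      print(test_acc)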