
I am trying to train my own custom object detector using the TensorFlow Object Detection API.

I installed TensorFlow with "pip install tensorflow" on my Google Compute Engine instance. Then I followed all the instructions on this site: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html

When I try to run train.py, I get this error message:

Traceback (most recent call last):
  File "train.py", line 49, in <module>
    from object_detection.builders import dataset_builder
  File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/builders/dataset_builder.py", line 27, in <module>
    from object_detection.data_decoders import tf_example_decoder
  File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/data_decoders/tf_example_decoder.py", line 27, in <module>
    slim_example_decoder = tf.contrib.slim.tfexample_decoder
AttributeError: module 'tensorflow' has no attribute 'contrib'

Also, I am getting conflicting results when I try to check the installed TensorFlow version.

python3 -c 'import tensorflow as tf; print(tf.__version__)'
2.0.0-dev20190422

and when I use

pip3 show tensorflow:

Name: tensorflow
Version: 1.13.1
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: opensource@google.com
License: Apache 2.0
Location: /usr/local/lib/python3.6/dist-packages
Requires: gast, astor, absl-py, tensorflow-estimator, keras-preprocessing, grpcio, six, keras-applications, wheel, numpy, tensorboard, protobuf, termcolor
Required-by:

The command I am running is:

    sudo python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_inception_v2_coco.config

What should I do to solve this problem? I couldn't find anything about this error message except this question: "tensorflow 'module' object has no attribute 'contrib'".

Answers

3 votes

tf.contrib has been removed from TensorFlow starting with the TF 2.0 alpha.
Take a look at the TF 2.0 release notes: https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0
You can upgrade your TF 1.x code to TF 2.x using the tf_upgrade_v2 script: https://www.tensorflow.org/alpha/guide/upgrade
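
For example, the script can be run on a single file or on a whole directory (a sketch; the file and directory names here are placeholders):

tf_upgrade_v2 --infile train.py --outfile train_v2.py
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/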

Sunday, August 21, 2022
 
3vi1
 
1 vote

According to the TF 1:1 Symbols Map, in TF 2.0 you should use tf.compat.v1.Session() instead of tf.Session():

https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0
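
As a quick illustration (a minimal sketch, assuming TF 2.x is installed), a TF 1.x-style graph can still be built and run through the compat.v1 namespace:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # placeholders require graph mode

x = tf.compat.v1.placeholder(tf.float32, shape=())
y = x * 2.0

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: 21.0}))  # prints 42.0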

To get TF 1.x-like behaviour in TF 2.0, one can run

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

but then one cannot benefit from many of the improvements made in TF 2.0. For more details, please refer to the migration guide: https://www.tensorflow.org/guide/migrate

Saturday, December 17, 2022
 
5 votes

At present there are three main options, which have different usability and performance trade-offs:

  1. In the Dataset.batch() transform, create a single large batch containing examples for all of your GPUs. Then use tf.split(..., self.num_gpus) on the output of Iterator.get_next() to create sub-batches for each GPU. This is probably the easiest approach, but it does place the splitting on the critical path (see the sketch after this list).

  2. In the Dataset.batch() transform, create a mini-batch that is sized for a single GPU. Then call Iterator.get_next() once per GPU to get multiple different batches. (By contrast, in your current code, the same value of next_batch is sent to each GPU, which is probably not what you wanted to happen.)

  3. Create multiple iterators, one per GPU. Shard the data using Dataset.shard() early in the pipeline (e.g. on the list of files if your dataset is sharded). Note that this approach will consume more resources on the host, so you may need to dial down any buffer sizes and/or degrees of parallelism.
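
As a rough illustration of option 1 (a minimal sketch in TF 1.x-era API; the dataset, num_gpus, and batch size are placeholder assumptions):

import tensorflow as tf

num_gpus = 2
per_gpu_batch = 32

# Option 1: build one large batch covering all GPUs, then split it.
dataset = tf.data.Dataset.range(1000)
dataset = dataset.batch(per_gpu_batch * num_gpus, drop_remainder=True)

iterator = dataset.make_one_shot_iterator()
big_batch = iterator.get_next()
sub_batches = tf.split(big_batch, num_gpus)  # one sub-batch per GPU

for i, sub_batch in enumerate(sub_batches):
    with tf.device('/gpu:%d' % i):
        # Build the per-GPU model on sub_batch here.
        pass

(For option 2, you would instead batch by per_gpu_batch alone and call iterator.get_next() once inside each GPU tower, yielding a different batch per tower.)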

Note that the current tf.data pipelines run on the CPU only, and an important aspect of an efficient pipeline is staging your training input to the GPU while the previous step is still running. See the TensorFlow CNN benchmarks for example code that shows how to stage data to GPUs efficiently. We are currently working on adding this support to the tf.data API directly.
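
In later TF 1.x releases this staging support did land in the API as tf.data.experimental.prefetch_to_device; a one-line sketch, continuing the dataset from the example above:

dataset = dataset.apply(tf.data.experimental.prefetch_to_device('/gpu:0'))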

Thursday, November 24, 2022
 
2 votes

You can find out which tfjs format you have by looking in the JSON file; it often says "graph-model". The difference between the formats is explained here.

From tfjs graph model to SavedModel (more common)

Use tfjs-to-tf by Patrick Levin.

import tensorflow as tf
import tfjs_graph_converter.api as tfjs

# Convert the tfjs graph model to a SavedModel directory.
tfjs.graph_model_to_saved_model(
    "savedmodel/posenet/mobilenet/float/050/model-stride16.json",
    "realsavedmodel"
)

# Code below taken from https://www.tensorflow.org/lite/convert/python_api
converter = tf.lite.TFLiteConverter.from_saved_model("realsavedmodel")
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
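
Optionally, you can sanity-check the converted file by loading it with the TFLite interpreter (a minimal sketch):

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())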

From tfjs layers model to SavedModel

Note: This will only work for layers model format, not graph model format as in the question. I've written the difference between them here.


  1. Install the tensorflowjs package and use the tensorflowjs_converter CLI to convert the .json file into a Keras HDF5 file (from another SO thread).

On macOS, you may face issues running pyenv (fix), and on Z shell, pyenv may not load correctly (fix). Also, once pyenv is running, use python -m pip install tensorflowjs instead of pip install tensorflowjs, because for me pyenv did not change the python that pip uses.

Once you've followed the tensorflowjs_converter guide, run tensorflowjs_converter with no arguments to verify that it works; it should only warn you about a missing input_path argument. Then:

tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras tfjs_model.json hdf5_keras_model.hdf5

  2. Convert the Keras HDF5 file into a SavedModel (the standard TensorFlow model format) or directly into a .tflite file using the TFLiteConverter. The following runs in a Python file:
import tensorflow as tf

# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)

or to save to a SavedModel:

# Convert the model.
model = tf.keras.models.load_model('hdf5_keras_model.hdf5')

# 'filepath' is your destination path; with save_format=None the format
# is inferred from it (a SavedModel directory unless it ends in '.h5').
tf.keras.models.save_model(
    model, filepath, overwrite=True, include_optimizer=True, save_format=None,
    signatures=None, options=None
)

Tuesday, November 1, 2022
 
greed
 
3 votes

Make42 is absolutely correct that the changes they describe in their answer must be made in order to migrate a codebase to work with TensorFlow 1.0. However, the errors you are seeing are in the Keras library itself. Fortunately, these errors have been fixed in the Keras codebase since January 2017, so upgrading to Keras 1.2.2 or later will fix the error for you.
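
For example (assuming pip manages your Keras installation):

pip install --upgrade "keras>=1.2.2"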

Tuesday, September 6, 2022