Asked  2 Years ago    Answers:  5   Viewed   4.7k times

I am trying to train my own custom object detector using the TensorFlow Object Detection API.

I installed TensorFlow using "pip install tensorflow" on my Google Compute Engine instance. Then I followed all the instructions on this site:

When I try to run it, I get this error message:

Traceback (most recent call last):
  File "", line 49, in
    from import dataset_builder
  File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/builders/", line 27, in
    from object_detection.data_decoders import tf_example_decoder
  File "/usr/local/lib/python3.6/dist-packages/object_detection-0.1-py3.6.egg/object_detection/data_decoders/", line 27, in
    slim_example_decoder = tf.contrib.slim.tfexample_decoder
AttributeError: module 'tensorflow' has no attribute 'contrib'

Also, I get conflicting results when I try to check the TensorFlow version.

python3 -c 'import tensorflow as tf; print(tf.__version__)' : 2.0.0-dev20190422

and when I use

pip3 show tensorflow:

Name: tensorflow
Version: 1.13.1
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page:
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /usr/local/lib/python3.6/dist-packages
Requires: gast, astor, absl-py, tensorflow-estimator, keras-preprocessing, grpcio, six, keras-applications, wheel, numpy, tensorboard, protobuf, termcolor
Required-by:

    sudo python3 --logtostderr --train_dir=training/ -- 

What should I do to solve this problem? I couldn't find anything about this error message except this: tensorflow 'module' object has no attribute 'contrib'



tf.contrib has been removed from TensorFlow as of TF 2.0 alpha.
Take a look at these tf 2.0 release notes.
You can upgrade your TF 1.x code to TF 2.x using the tf_upgrade_v2 script.
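For reference, the upgrade script is invoked from the command line (the filenames and project directory below are hypothetical placeholders; tf_upgrade_v2 ships with the TF 2.x pip package):

```shell
# Upgrade a single script (hypothetical filenames):
tf_upgrade_v2 --infile train.py --outfile train_v2.py

# Or upgrade a whole source tree and write a report of the changes:
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt
```

The report file lists every rewritten symbol, which is useful for spotting tf.contrib usages that have no automatic replacement and must be ported by hand.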

Sunday, August 21, 2022

You normally import tensorflow by writing,

import tensorflow as tf

It's possible that you have named a file in your project tensorflow.py, and the import statement is importing from this file instead of the actual library.

Alternatively, you can try this,

from tensorflow.python.framework import ops
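To see why a stray file name breaks the import, here is a minimal, self-contained sketch of the shadowing effect (it uses a throwaway decoy module in a temp directory rather than touching a real project):

```python
import os
import sys
import tempfile

# Create a decoy "tensorflow.py" that has no 'contrib' attribute.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "tensorflow.py"), "w") as f:
    f.write("version = 'decoy'\n")

# Putting the directory first on sys.path mimics running a script
# from a folder that contains the decoy file.
sys.path.insert(0, tmp)
import tensorflow  # resolves to the decoy, not the real package

print(tensorflow.__file__)             # points into the temp directory
print(hasattr(tensorflow, "contrib"))  # False -> AttributeError on access
```

Printing tensorflow.__file__ right after the import is a quick way to confirm which module was actually loaded.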
Wednesday, December 14, 2022

I was unable to reproduce this with the same versions of keras and tensorflow; reinstalling keras and tensorflow may solve the issue. Please use the commands below:

pip install --upgrade pip setuptools wheel
pip install -I tensorflow
pip install -I keras

NOTE: The -I flag (--ignore-installed) tells pip to ignore already-installed packages and reinstall from scratch.

Tuesday, November 22, 2022

Hopefully you have saved the Estimator model using code similar to that mentioned below:

input_column = tf.feature_column.numeric_column("x")
estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])

def input_fn():
  return tf.data.Dataset.from_tensor_slices(
    ({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)
estimator.train(input_fn)

serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
  tf.feature_column.make_parse_example_spec([input_column]))
export_path = estimator.export_saved_model(
  "/tmp/from_estimator/", serving_input_fn)

You can Load the Model using the code mentioned below:

imported = tf.saved_model.load(export_path)

To Predict using your Model by passing the Input Features, you can use the below code:

def predict(x):
  example = tf.train.Example()
  example.features.feature["x"].float_list.value.extend([x])
  return imported.signatures["predict"](
      examples=tf.constant([example.SerializeToString()]))


For more details, please refer to this link, in which saving models with the TF Estimator API is explained.

Sunday, August 28, 2022

There are a number of possibilities that can lead to this issue.

1- The ops used in Python may not behave in exactly the same manner in js. If that is the case, using exactly the same ops in both will get rid of the issue.

2- The image tensor might be read differently by the Python library and the browser canvas. Actually, across browsers, canvas pixels don't always have the same values, due to operations like anti-aliasing, etc., as explained in this answer. So there might be slight differences in the results of the operations. To make sure this is the root cause of the issue, first print the Python and the js image arrays and see if they are alike. It is likely that the 3d tensor differs between js and Python.

tensor3d = tf.tensor3d(image,[height,width,1],'float32')

In this case, instead of reading the image directly in the browser, one can use the Python library to convert the image to an array of values, and have tfjs read that array directly instead of the image. That way, the input tensors will be the same both in js and in Python.

3- It is a float32 precision issue. The tensor3d is created with dtype float32, and depending on the operations used, there might be a precision issue. Consider this operation:

tf.scalar(12045, 'int32').mul(tf.scalar(12045, 'int32')).print(); // 145082032 instead of 145082025

The same precision issue will be encountered in python with the following:

a = tf.constant([12045], dtype='float32') * tf.constant([12045], dtype='float32')
tf.print(a)  # 145082032

In Python this can be solved by using the int32 dtype. However, because of the webgl float32 limitation, the same thing can't be done with the webgl backend in tfjs. In neural networks this precision issue is not a big deal. To get rid of it, one can change the backend, for instance with setBackend('cpu'), which is much slower.
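The same float32 rounding can be reproduced outside of TensorFlow with plain NumPy, which makes it easy to verify:

```python
import numpy as np

# 12045 is exactly representable in float32 (it fits in the 24 mantissa
# bits), but its square needs 28 bits, so the product gets rounded.
product = np.float32(12045) * np.float32(12045)

print(int(product))    # 145082032 (rounded float32 result)
print(12045 * 12045)   # 145082025 (exact integer arithmetic)
```

Any value whose square exceeds 2**24 will show the same effect, which is why element-wise products of large pixel-derived values can diverge slightly between backends.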

Monday, December 26, 2022