some questions of environment error and refG's overIoU in the paper #12

xiejialong opened this issue Mar 29, 2023 · 2 comments

@xiejialong

Hello!
Recently I finished some experiments. In our experiments, the overIoU on refG test(u) is lower than VLT's, but I find that the overIoU reported for test(u) is much higher than that of other algorithms. I therefore tried to run your code. Unfortunately, the code raised an error and I could not find a workaround. The reproduction process is as follows:
step 1:
git clone https://github.com/henghuiding/Vision-Language-Transformer
step 2:
conda create -n vlt python=3.6
conda activate vlt
pip install -r requirement.txt
step 3:
python vlt.py test pretrain/config.yaml
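(As a sanity check of step 2, here is a minimal sketch for printing the TensorFlow/Keras versions that actually got installed in the vlt environment; the exact pins come from requirement.txt and are not repeated here.)

# check_env.py -- print the versions resolved inside the "vlt" conda environment
import sys

import tensorflow as tf

print("python     :", sys.version.split()[0])
print("tensorflow :", tf.__version__)

try:
    import keras  # the standalone Keras package, if it is installed at all
    print("keras      :", keras.__version__)
except ImportError:
    print("keras      : not installed")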

Errors:
File "vlt.py", line 57, in
tester = Tester(config, GPUS=GPU_COUNTS, debug=args.debug)
File "/home/users3/workspace/Vision-Language-Transformer/executor.py", line 184, in init
super(Tester, self).init(config, **kwargs)
File "/home/users3/workspace/Vision-Language-Transformer/executor.py", line 39, in init
self.yolo_model, self.yolo_body, self.yolo_body_single = self.create_model()
File "/home/users3/workspace/Vision-Language-Transformer/executor.py", line 54, in create_model
model_body = yolo_body(image_input, q_input, self.config)
File "/home/users3/workspace/Vision-Language-Transformer/model/vlt_model.py", line 162, in yolo_body
mask_out = make_multitask_braches(Fv, fq, fq_word, config)
File "/home/users3/workspace/Vision-Language-Transformer/model/vlt_model.py", line 79, in make_multitask_braches
mask_out = vlt_transformer(F_tf, fq_word, query_out, config)
File "/home/users3/workspace/Vision-Language-Transformer/model/vlt_model.py", line 100, in vlt_transformer
head_num=config.transformer_head_num)
File "/home/users3/workspace/Vision-Language-Transformer/model/transfromer_model.py", line 79, in lang_tf_enc
)(lang_input)
File "/home/users3/anaconda3/envs/vlt/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 881, in call
inputs, outputs, args, kwargs)
File "/home/users3/anaconda3/envs/vlt/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2043, in set_connectivity_metadata
input_tensors=inputs, output_tensors=outputs, arguments=arguments)
File "/home/users3/anaconda3/envs/vlt/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2059, in _add_inbound_node
input_tensors)
File "/home/users3/anaconda3/envs/vlt/lib/python3.6/site-packages/tensorflow_core/python/util/nest.py", line 536, in map_structure
structure[0], [func(*x) for x in entries],
File "/home/users3/anaconda3/envs/vlt/lib/python3.6/site-packages/tensorflow_core/python/util/nest.py", line 536, in
structure[0], [func(*x) for x in entries],
File "/home/users3/anaconda3/envs/vlt/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2058, in
inbound_layers = nest.map_structure(lambda t: t._keras_history.layer,
AttributeError: 'tuple' object has no attribute 'layer'

If it is convenient, I would be grateful if you could provide the logs from running and testing your code on refG, or a more detailed environment configuration. Thanks.

@Shuaicong97

Hello,

How is it going? I saw in step 2 that your Python version is 3.6, so is your reproduction based entirely on the versions in requirement.txt, rather than the latest versions such as TensorFlow 2.13.0? I keep running into problems caused by version differences, and for some of them I cannot find a solution, so I am thinking of creating a virtual environment that exactly matches the repository, even though it would be preferable to use the latest versions...

Thank you in advance.

@xiejialong
Author

@Shuaicong97, we configured the environment exactly as specified in requirement.txt. We have since found that the problem stems from the standalone Keras package and TensorFlow's built-in Keras module: the original code uses a mix of both, which confuses the two APIs. To get it to run, imports from one of them must be replaced with imports from the other.
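As an illustration (a minimal sketch, not the actual diff we applied; the imported layer names below are only examples), the idea is to make every module import from a single Keras implementation, e.g. unify on tf.keras:

# Before (mixed): some modules import the standalone package ...
#   from keras.layers import Dense, Conv2D, Lambda
#   from keras.models import Model
# ... while others use TensorFlow's bundled Keras:
#   from tensorflow.keras.layers import Input

# After (unified on tf.keras everywhere):
from tensorflow.keras.layers import Input, Dense, Conv2D, Lambda
from tensorflow.keras.models import Model

# Layers from the standalone `keras` package and from `tf.keras` attach
# incompatible `_keras_history` metadata to their output tensors; applying a
# layer from one implementation to a tensor built by the other is what leads
# to errors like "AttributeError: 'tuple' object has no attribute 'layer'".

The converse (unifying everything on the standalone keras package instead) should also work, as long as the choice is consistent across the whole codebase.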
