
ValueError: Dimensions 23 and 1 are not compatible #2

Open
gaceladri opened this issue Apr 25, 2019 · 1 comment

@gaceladri

Hello,

First of all... thanks a lot for this interesting work! I am trying to use it on a normal classification problem. I would like to check whether it is competitive on ordinary problems, apart from the quadratic example you have in your notebook.

I am trying to use this optimization with a TextCNN architecture, where...

reg_loss = tf.losses.get_regularization_loss(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.reduce_mean(per_example_loss) + reg_loss

if mode == tf.estimator.ModeKeys.TRAIN:
    # optimizer = select_optimizer(hparams)
    opt = tf.train.GradientDescentOptimizer(0.2)
    gav = opt.compute_gradients(loss)
    optimizer = guided_es(loss, gav, sigma=1.0, alpha=0.5, beta=2.0)
    train_optimizer = opt.apply_gradients(optimizer)
    return tf.estimator.EstimatorSpec(mode=mode,
                                      loss=loss,
                                      train_op=train_optimizer)

And I am getting the following error:

<ipython-input-6-9fee732f4a6d> in textCNN_per_label_loss(features, labels, mode, params)
     76         opt = tf.train.GradientDescentOptimizer(0.2)
     77         gav = opt.compute_gradients(loss)
---> 78         optimizer = guided_es(loss, gav, sigma=1.0, alpha=0.5, beta=2.0)
     79         train_optimizer = opt.apply_gradients(optimizer)
     80         return tf.estimator.EstimatorSpec(mode=mode,

<ipython-input-5-8fa40c3c8a5c> in guided_es(loss_fn, grads_and_vars, sigma, alpha, beta)
     63                            scale_perturb_diag=perturb_diag)
     64 
---> 65     dists = {v.op.name: vardist(g, v) for g, v in grads_and_vars}
     66 
     67     # antithetic getter

<ipython-input-5-8fa40c3c8a5c> in <dictcomp>(.0)
     63                            scale_perturb_diag=perturb_diag)
     64 
---> 65     dists = {v.op.name: vardist(g, v) for g, v in grads_and_vars}
     66 
     67     # antithetic getter

<ipython-input-5-8fa40c3c8a5c> in vardist(grad, variable)
     61         return mvn_lowrank(scale_diag=scale_diag,
     62                            scale_perturb_factor=perturb_factor,
---> 63                            scale_perturb_diag=perturb_diag)
     64 
     65     dists = {v.op.name: vardist(g, v) for g, v in grads_and_vars}

~\.conda\envs\tensor\lib\site-packages\tensorflow\python\util\deprecation.py in new_func(*args, **kwargs)
    304               'in a future version' if date is None else ('after %s' % date),
    305               instructions)
--> 306       return func(*args, **kwargs)
    307     return tf_decorator.make_decorator(
    308         func, new_func, 'deprecated',

~\.conda\envs\tensor\lib\site-packages\tensorflow\contrib\distributions\python\ops\mvn_diag_plus_low_rank.py in __init__(self, loc, scale_diag, scale_identity_multiplier, scale_perturb_factor, scale_perturb_diag, validate_args, allow_nan_stats, name)
    256               is_self_adjoint=True,
    257               is_positive_definite=True,
--> 258               is_square=True)
    259     super(MultivariateNormalDiagPlusLowRank, self).__init__(
    260         loc=loc,

~\.conda\envs\tensor\lib\site-packages\tensorflow\python\ops\linalg\linear_operator_low_rank_update.py in __init__(self, base_operator, u, diag_update, v, is_diag_update_positive, is_non_singular, is_self_adjoint, is_positive_definite, is_square, name)
    259       self._is_diag_update_positive = is_diag_update_positive
    260 
--> 261       self._check_shapes()
    262 
    263       # Pre-compute the so-called "capacitance" matrix

~\.conda\envs\tensor\lib\site-packages\tensorflow\python\ops\linalg\linear_operator_low_rank_update.py in _check_shapes(self)
    280 
    281     if self._diag_update is not None:
--> 282       uv_shape[-1].assert_is_compatible_with(self._diag_update.get_shape()[-1])
    283       array_ops.broadcast_static_shape(
    284           batch_shape, self._diag_update.get_shape()[:-1])

~\.conda\envs\tensor\lib\site-packages\tensorflow\python\framework\tensor_shape.py in assert_is_compatible_with(self, other)
    114     if not self.is_compatible_with(other):
    115       raise ValueError("Dimensions %s and %s are not compatible" % (self,
--> 116                                                                     other))
    117 
    118   def merge_with(self, other):

ValueError: Dimensions 23 and 1 are not compatible
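For context on what this check enforces: `MultivariateNormalDiagPlusLowRank` builds a scale operator of the form `diag(scale_diag) + U @ diag(scale_perturb_diag) @ U.T`, and TensorFlow's `LinearOperatorLowRankUpdate` requires the trailing dimension of `U` (the rank of the update, here 23) to match the length of `scale_perturb_diag` (here 1). A minimal NumPy sketch of that shape rule (`check_low_rank_shapes` is a hypothetical helper, not part of TensorFlow or this repo):

```python
import numpy as np

def check_low_rank_shapes(scale_diag, perturb_factor, perturb_diag):
    """Mimic TF's LinearOperatorLowRankUpdate shape check, then build
    scale = diag(scale_diag) + U @ diag(perturb_diag) @ U.T."""
    k, r = perturb_factor.shape  # U has shape [k, r]
    if scale_diag.shape[0] != k:
        raise ValueError("Dimensions %d and %d are not compatible"
                         % (scale_diag.shape[0], k))
    # This is the check that fires in the traceback above:
    # U's last dimension must equal the length of the diag update.
    if perturb_diag.shape[0] != r:
        raise ValueError("Dimensions %d and %d are not compatible"
                         % (r, perturb_diag.shape[0]))
    return (np.diag(scale_diag)
            + perturb_factor @ np.diag(perturb_diag) @ perturb_factor.T)

# Compatible shapes: U is [5, 2], diag update has length 2.
scale = check_low_rank_shapes(np.ones(5), np.ones((5, 2)), np.ones(2))

# Incompatible shapes reproduce the error message: U is [23, 23],
# but the diag update has length 1.
try:
    check_low_rank_shapes(np.ones(23), np.ones((23, 23)), np.ones(1))
except ValueError as e:
    print(e)  # Dimensions 23 and 1 are not compatible
```

So the likely cause is that, for at least one variable, `guided_es` builds a `scale_perturb_factor` whose last dimension disagrees with the length of `scale_perturb_diag` it passes alongside it.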

I am a little bit lost here. First of all, is it applicable to normal classification problems? And if so... do you have any idea what the problem could be?

Thanks a lot!
Best regards,

@Boutheina02

I have the same problem. Please, can you help me if you find a solution?
