ValueError: The `kernel_size` argument must be a tuple of 2 integers.

Source code:

```python
def discriminator(self, x, reuse=tf.compat.v1.AUTO_REUSE, training=True):
    depth = 64
    dropout = 0.4
    kernel_size = (4, 2)
    with tf.compat.v1.variable_scope('Discriminator', reuse=reuse):
        x = tf.keras.layers.Conv2D(x, (depth * 1, kernel_size), strides=(2, 1), padding='same')
        x = tf.keras.layers.BatchNormalization(x, momentum=0.9)  # , training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        x = tf.keras.layers.Dropout(x, dropout, training=training)
        x = tf.keras.layers.Conv2D(x, (depth * 2, kernel_size), strides=(2, 1), padding='same')
        x = tf.keras.layers.BatchNormalization(x, momentum=0.9)  # , training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        x = tf.keras.layers.dropout(x, dropout, training=training)
        x = tf.keras.layers.Conv2D(x, (depth * 4, kernel_size), strides=(2, 1), padding='same')
        x = tf.keras.layers.BatchNormalization(x, momentum=0.9)  # , training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        x = tf.keras.layers.dropout(x, dropout, training=training)
        x = tf.keras.layers.Conv2D(x, (depth * 8, kernel_size), strides=(2, 1), padding='same')
        x = tf.keras.layers.BatchNormalization(x, momentum=0.9)  # , training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        x = tf.keras.layers.dropout(x, dropout, training=training)
        x = tf.keras.layers.Conv2D(x, (depth * 16, kernel_size), strides=(2, 1), padding='same')
        x = tf.keras.layers.BatchNormalization(x, momentum=0.9)  # , training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        x = tf.keras.layers.dropout(x, dropout, training=training)
        x = tf.keras.layers.Conv2D(x, (depth * 32, kernel_size), strides=(2, 1), padding='same')
        x = tf.keras.layers.BatchNormalization(x, momentum=0.9)  # , training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        x = tf.keras.layers.dropout(x, dropout, training=training)
        x = tf.keras.layers.flatten(x)
        x = tf.keras.layers.dense(x, 1024)
        x = tf.keras.layers.BatchNormalization(x, momentum=0.9)  # , training=training)
        x = tf.nn.leaky_relu(x, alpha=0.2)
        d = tf.keras.layers.dense(x, 1)
        q = tf.keras.layers.dense(x, 128)
        q = tf.keras.layers.BatchNormalization(q, momentum=0.9)  # , training=training)
        q = tf.nn.leaky_relu(q, alpha=0.2)
        q_mean = tf.keras.layers.Dense(q, self.latent_dim)
        q_logstd = tf.keras.layers.Dense(q, self.latent_dim)
        q_logstd = tf.maximum(q_logstd, -16)
        # Reshape to batch_size x 1 x latent_dim
        q_mean = tf.reshape(q_mean, (-1, 1, self.latent_dim))
        q_logstd = tf.reshape(q_logstd, (-1, 1, self.latent_dim))
        q = tf.concat([q_mean, q_logstd], axis=1, name='predicted_latent')  # batch_size x 2 x latent_dim
        return d, q
```
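The error comes from how `tf.keras.layers.Conv2D` is being called. It is a layer class, not a function: its first argument is `filters` and its second is `kernel_size`, and the constructed layer is then applied to the tensor. In the code above, `Conv2D(x, (depth * 1, kernel_size), ...)` passes the tensor `x` as `filters` and the tuple `(64, (4, 2))` as `kernel_size`, which is why Keras complains that `kernel_size` must be a tuple of 2 integers. The same pattern affects `BatchNormalization`, `Dropout`, `Dense`, and `Flatten` (and note that lowercase `tf.keras.layers.dropout` / `flatten` / `dense` do not exist in `tf.keras.layers`; those were the old `tf.layers` functional API). A minimal sketch of one corrected conv block, with `discriminator_block` being a hypothetical helper name:

```python
import tensorflow as tf

def discriminator_block(x, filters, kernel_size=(4, 2), dropout=0.4, training=True):
    # filters and kernel_size are separate arguments; the layer object
    # is constructed first, then called on the input tensor.
    x = tf.keras.layers.Conv2D(filters, kernel_size, strides=(2, 1), padding='same')(x)
    # training is passed when calling the layer, not to the constructor.
    x = tf.keras.layers.BatchNormalization(momentum=0.9)(x, training=training)
    x = tf.nn.leaky_relu(x, alpha=0.2)
    x = tf.keras.layers.Dropout(dropout)(x, training=training)
    return x
```

Applying the same `Layer(args)(tensor)` pattern to every `Conv2D`, `BatchNormalization`, `Dropout`, `Flatten`, and `Dense` call in `discriminator` should clear the `ValueError`.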
