This repository was archived by the owner on Nov 17, 2023. It is now read-only.

Conversation

@gilbertfrancois
Contributor

@gilbertfrancois gilbertfrancois commented Oct 13, 2020

Fix the direction of in_channels -> out_channels in the repr function for ConvTranspose classes.

Co-authored-by: g4b1nagy <gabrian.nagy@gmail.com>

fixes #19338

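For context, a minimal sketch of what the fix changes (the helper names and signatures below are illustrative, not the actual MXNet source): before the fix, the repr of ConvTranspose layers printed the channel mapping in the wrong direction (out -> in); after the fix, it prints in -> out, matching the Conv classes.

```python
# Illustrative sketch only -- not the actual MXNet implementation.
# The fix makes ConvTranspose layers report channels as "in -> out",
# the same direction the Conv classes already use.

def channel_mapping(in_channels, out_channels):
    # Corrected order: input channels first, output channels second.
    return "{0} -> {1}".format(in_channels, out_channels)

def layer_repr(name, in_channels, out_channels, kernel_size):
    # Build a repr string in the style Gluon layers print.
    return "{0}({1}, kernel_size={2})".format(
        name, channel_mapping(in_channels, out_channels), kernel_size)

print(layer_repr("Conv2DTranspose", 1, 2, (3, 3)))
# Conv2DTranspose(1 -> 2, kernel_size=(3, 3))
```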
@gilbertfrancois gilbertfrancois requested a review from szha as a code owner October 13, 2020 10:44
@mxnet-bot

Hey @gilbertfrancois, thanks for submitting the PR.
All tests are queued to run once. If tests fail, you can trigger one or more tests again with the following commands:

  • To trigger all jobs: @mxnet-bot run ci [all]
  • To trigger specific jobs: @mxnet-bot run ci [job1, job2]

CI supported jobs: [sanity, windows-cpu, centos-gpu, windows-gpu, unix-gpu, unix-cpu, miscellaneous, edge, website, clang, centos-cpu]


Note:
Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
All CI tests must pass before the PR can be merged.

@gilbertfrancois
Contributor Author

I wrote a small test to demonstrate the new behaviour. Note that the test runs on GPU, because on CPU only 1D and 2D Deconvolution are supported.

import mxnet as mx


ctx = mx.gpu(0)

# Random single-channel inputs for the 1D, 2D and 3D cases.
x_1d = mx.nd.random.randn(1, 1, 8, ctx=ctx)
x_2d = mx.nd.random.randn(1, 1, 8, 8, ctx=ctx)
x_3d = mx.nd.random.randn(1, 1, 8, 8, 8, ctx=ctx)

# 1D: Conv1D and Conv1DTranspose should both map 1 -> 2 channels.
conv = mx.gluon.nn.Conv1D(in_channels=1, channels=2, kernel_size=3, strides=1)
conv.initialize(ctx=ctx)
y_1d = conv(x_1d)
mx.nd.waitall()
print(conv)
assert x_1d.shape[1] == 1
assert y_1d.shape[1] == 2
conv_t = mx.gluon.nn.Conv1DTranspose(in_channels=1, channels=2, kernel_size=3, strides=1)
conv_t.initialize(ctx=ctx)
y_1d = conv_t(x_1d)
mx.nd.waitall()
print(conv_t)
assert x_1d.shape[1] == 1
assert y_1d.shape[1] == 2

# 2D: the repr should report the channel mapping as "in -> out".
conv = mx.gluon.nn.Conv2D(in_channels=1, channels=2, kernel_size=3, strides=1)
conv.initialize(ctx=ctx)
y_2d = conv(x_2d)
mx.nd.waitall()
print(conv)
assert x_2d.shape[1] == 1
assert y_2d.shape[1] == 2
assert "1 -> 2" in repr(conv)
conv_t = mx.gluon.nn.Conv2DTranspose(in_channels=1, channels=2, kernel_size=3, strides=1)
conv_t.initialize(ctx=ctx)
y_2d = conv_t(x_2d)
mx.nd.waitall()
print(conv_t)
assert x_2d.shape[1] == 1
assert y_2d.shape[1] == 2
assert "1 -> 2" in repr(conv_t)

# 3D: Deconvolution in 3D is GPU-only, hence the GPU context above.
conv = mx.gluon.nn.Conv3D(in_channels=1, channels=2, kernel_size=3, strides=1)
conv.initialize(ctx=ctx)
y_3d = conv(x_3d)
mx.nd.waitall()
assert x_3d.shape[1] == 1
assert y_3d.shape[1] == 2
print(conv)
conv_t = mx.gluon.nn.Conv3DTranspose(in_channels=1, channels=2, kernel_size=3, strides=1)
conv_t.initialize(ctx=ctx)
y_3d = conv_t(x_3d)
mx.nd.waitall()
assert x_3d.shape[1] == 1
assert y_3d.shape[1] == 2
print(conv_t)

@leezu
Contributor

leezu commented Oct 13, 2020

@mxnet-bot run ci [centos-cpu]

@mxnet-bot

Jenkins CI successfully triggered : [centos-cpu]

@lanking520 added the labels pr-awaiting-testing (PR is reviewed and waiting CI build and test) and pr-awaiting-review (PR is waiting for code review), then removed pr-awaiting-testing, Oct 14, 2020
@szha szha merged commit 94b649f into apache:master Oct 14, 2020
@szha
Member

szha commented Oct 14, 2020

thanks for the fix @gilbertfrancois @g4b1nagy

chinakook pushed a commit to chinakook/mxnet that referenced this pull request Nov 17, 2020
* Fix the direction of in_channels -> out_channels in the repr function for ConvTranspose classes.

Co-authored-by: g4b1nagy <gabrian.nagy@gmail.com>

Labels

pr-awaiting-review PR is waiting for code review

Development

Successfully merging this pull request may close these issues.

The print output of mx.nn.Conv2dTranspose shows the wrong direction of in and out channels.

5 participants