PyTorch reshape layers, and how to get a 2D output from an embedding layer in PyTorch.
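nn.Embedding returns a 3D tensor of shape (batch, sequence length, embedding dim), so getting a 2D output usually just means flattening the last two dimensions before a Linear layer. A minimal sketch, with made-up sizes:

    import torch
    import torch.nn as nn

    batch_size, seq_len, embedding_dim = 4, 10, 32              # hypothetical sizes
    embedding = nn.Embedding(num_embeddings=100, embedding_dim=embedding_dim)

    tokens = torch.randint(0, 100, (batch_size, seq_len))       # (4, 10) integer ids
    out = embedding(tokens)                                      # (4, 10, 32), 3D
    flat = out.reshape(batch_size, -1)                           # (4, 320), 2D, ready for nn.Linear(320, ...)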
torch.reshape returns a view of the input when possible; otherwise it returns a copy. Related tools are Tensor.reshape_as and torch.flatten for tensor views, and squeeze() for getting rid of stray size-1 dimensions.

A layer's dimensions should not depend on the batch size: the model definition is independent of the batch size, so it is unusual (and usually wrong) to define a specific shape relative to it. A common source of confusion is a Keras head such as

    x = Dense(512, activation='relu')(x)
    predictions = Dense(49*6, activation='sigmoid')(x)
    reshape = Reshape((49, 6))(predictions)

which in PyTorch is simply a Linear layer whose output is reshaped to (batch, 49, 6) in forward. The same idea covers feeding a torch.Size([5, 2048]) tensor to an nn.Linear(2048, 256) layer, or mapping the output of a 2D CNN for OCR to the 76 classes required by the CTC loss.

Pointwise nn.Linear layers can be converted to 1x1 nn.Conv2d layers (and back), because for a 1x1 input image the two are equivalent; you only need to copy the weight with two extra size-1 dimensions, e.g. conv_layer.weight.copy_(lin_layer[:, :, None, None]). Indexing with None adds a dimension of size 1. When multiplying matrices with torch.mm, transpose the second or the first operand depending on whether the desired output is [16384, 16384] or [3, 3].

To swap pooling layers, filter model.named_modules(), keep only the max pool layers, and replace those with average pool layers. If you monkey-patch a module, apply the patch as early as possible, before importing any modules that depend on it; the order and timing of imports affects whether the patch takes effect.

Always check the shape of x before reshaping: flattening data into the batch dimension is usually wrong and produces shape mismatches later. This also matters when splitting torchvision models into a featurizer plus extra layers, because there is a reshape before the final FC layer (torchvision's resnets flatten inside forward), so you have to know where it happens. Implementing spatial batch norm by hand runs into the same issue when the output of a Linear layer has to be reshaped after BatchNorm and ReLU.

A typical CNN pipeline hits the same question: after the sixth conv layer the tensor has shape (B, C, H, W) and must be flattened before the Linear layer. nn.Conv1d layers work for inputs of any length, but the first nn.Linear layer is the problem, because the flattened data length is unknown at initialization time; every time the input length changes, the Conv1d output size changes and so does the required in_features.
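That last problem, not knowing in_features until the data length is known, has a common workaround: run one dummy forward pass through the convolutional part and read the flattened size off the result (recent PyTorch also offers nn.LazyLinear, which infers it automatically on the first forward pass). Everything below is an illustrative sketch, with invented names and sizes, not code from the thread:

    import torch
    import torch.nn as nn

    class ConvThenLinear(nn.Module):
        def __init__(self, example_length=100, n_classes=76):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(),
            )
            # Dummy forward pass to discover the flattened size for the first Linear layer.
            with torch.no_grad():
                dummy = torch.zeros(1, 1, example_length)
                flat_size = self.features(dummy).reshape(1, -1).shape[1]
            self.classifier = nn.Linear(flat_size, n_classes)

        def forward(self, x):
            x = self.features(x)
            x = x.reshape(x.size(0), -1)     # flatten everything except the batch dimension
            return self.classifier(x)

    model = ConvThenLinear()
    out = model(torch.randn(8, 1, 100))      # shape (8, 76)

The obvious limitation is that the input length is then fixed at construction time, which is exactly the point made above: if the length changes, in_features has to change with it.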
One recurring question: implementing a custom layer that needs several reshapes of two input tensors, where the batch has shape (N, a, b, c), N being the number of tensors in the batch, and each (a, b, c) tensor has to be turned into an (a*b, c) matrix. reshape() is the right tool; it changes the shape of a tensor without changing its data, returns a tensor with the same number of elements but the specified shape, gives back a view of the input whenever possible, and lets you pass -1 for one dimension so that PyTorch infers it. I would also recommend checking the shape of x before the reshape, since flattening data into the batch dimension, as a careless x.reshape(-1, ...) can do, is usually wrong. Note that newer versions of PyTorch allow nn.Linear to accept an N-dimensional input tensor; the only constraint is that the last dimension equals the layer's in_features, and the linear transformation is applied to that last dimension.

A dedicated Reshape() layer still has its use cases, for example in embedded systems where you put a reshape at the front of the model so that the whole model can be compacted and flashed to the device, with the reshape adapting the incoming sensor data.

Several forum questions fall into the same pattern: reshaping a [batch_size, c*h*w] = [24, 1152] tensor into [batch_size, c, h, w] = [24, 128, 3, 3]; reshaping a dynamically built network's activations from a Linear layer back to a convolutional shape; feeding a (1 x 8) sample from an encoder into a decoder; replacing only the last Linear layer of a pretrained resnext_101_64x4d wrapped in nn.Sequential; building a convolutional network for text-based applications; and reshaping the last layer of a CNN for transfer learning. A related pitfall: an nn.Linear was defined with in_features 32678 while the data was reshaped to 32768; the two numbers have to match exactly. Also remember that in training mode nn.BatchNorm layers use batch statistics and update their running estimates, whereas after calling .eval() the running estimates are used, which matters when comparing a training run against a manual evaluation.

For Keras users, Reshape's target_shape is a tuple of integers that does not include the samples (batch) dimension, and all dimensions in the input shape must be known/fixed. In the OCR pipeline mentioned above, the conv-stack output has to be reshaped before the Linear layer that maps to the class scores, and the linear output then goes to softmax and the CTC loss; likewise, if a Conv1d output has to feed a Conv2d, it needs an extra size-1 spatial dimension.
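Both of those reshapes are one-liners. The 24/1152/128x3x3 numbers come from the question; the Conv1d sizes below are hypothetical:

    import torch

    x = torch.randn(24, 1152)               # [batch_size, c*h*w]
    y = x.reshape(24, 128, 3, 3)            # [batch_size, c, h, w]; valid because 128*3*3 == 1152

    # Feeding a Conv1d output (B, C, L) into a Conv2d, which expects (B, C, H, W):
    out1d = torch.randn(8, 32, 50)          # hypothetical Conv1d output
    as_image = out1d.unsqueeze(2)           # (8, 32, 1, 50): a one-pixel-high "image"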
A frequent beginner mistake is sizing the first layer by the batch size, for example assuming the first layer must take 50 inputs because the batch size is 50. Layers are defined by their expected input features, not by the number of samples; the real question behind a network printout that starts with Conv2d(1, 32, ...) is where the flatten/reshape belongs between the conv stack and the first Linear layer. The same holds for a configurable model built from Conv - [BatchNorm?] - ReLU - [MaxPool?] blocks, where the BatchNorm and pooling layers are only used if the corresponding flags are set.

For the custom layer above, the practical answer is to reshape each (a, b, c) tensor to an (a*b, c) matrix and then use torch.mm to multiply them (or keep the batch dimension and use a batched matrix multiply). Having Reshape available as an nn.Module is convenient precisely because it can then be dropped into nn.Sequential, with no need to implement forward by hand for the surrounding model.

On inverting a layer: lasagne's InverseLayer uses the derivative of the layer it is based on, so it effectively provides that layer's backpropagation step, but it does not handle the nonlinearity, so that part has to be added manually.

Other snippets from the same threads: a user comparing Keras and PyTorch found that the PyTorch LSTM did not train (loss and accuracy did not change during the epoch) and assumed the PyTorch code was at fault; input images of size [3, 512, 512]; an LSTM whose output is a tuple, after which the data needs to be reshaped into sequences of length 8; reshaping a 3D tensor to 2D, pushing it through a Linear layer, and reshaping the result back so that the new 3D tensor's fibers correspond to the old ones; a (1, 128, 9) tensor holding one batch of 128 images with 9 extracted features each; and turning the 2D output of a linear layer into the 4D input of an nn.ConvTranspose2d.

view() and reshape() look almost identical at first glance, since both let you change a tensor's dimensions; PyTorch has also since added an nn.Flatten module, which does the flattening job inside a Sequential, and its counterpart nn.Unflatten (more on that below).
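When something more specific than Flatten or Unflatten is needed, the Reshape-as-a-module idea above is only a few lines. This is a sketch, not an official PyTorch layer, and the sizes in the example Sequential are invented:

    import torch
    import torch.nn as nn

    class Reshape(nn.Module):
        """Reshape layer so the operation can live inside nn.Sequential."""
        def __init__(self, *shape):
            super().__init__()
            self.shape = shape              # target shape, excluding the batch dimension

        def forward(self, x):
            return x.reshape(x.size(0), *self.shape)

    model = nn.Sequential(
        nn.Linear(8, 128 * 3 * 3),
        nn.ReLU(),
        Reshape(128, 3, 3),                 # (batch, 1152) -> (batch, 128, 3, 3)
        nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2),
    )
    out = model(torch.randn(5, 8))          # (5, 64, 8, 8)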
When porting an old torch7 model (e.g. StyleNet) to PyTorch, layers such as Lambda(lambda x: x.view(x.size(0), -1))  # Reshape followed by nn.Linear(3072, 128)  # Linear are directly substitutable: the Lambda is just a flatten, which nn.Flatten now covers, and the Linear stays as it is. A helper like x.view(1, -1) if 1 == len(x.size()) else x merely adds a batch dimension to 1-D inputs. (As a side note, the way Sequentials, and presumably other modules, are stored has changed between prehistoric 0.x releases and modern PyTorch, so very old serialized models may need re-mapping.)

The same transfer-learning pattern shows up in Keras: base_model = InceptionV3(weights='imagenet', include_top=False), then x = base_model.output, x = Dense(512, activation='relu')(x), predictions = Dense(49*6, activation='sigmoid')(x), reshape = Reshape((49, 6))(predictions), model = Model(inputs=base_model.input, outputs=reshape), with the base layers frozen in a loop over base_model.layers. Reproducing this in PyTorch, for example to classify different boat types, again comes down to a Linear layer plus a reshape of its output.

Data transformation plays a crucial role in deep learning models, and reshaping is one of the most common transformations: a batch of image data usually has to be flattened before a fully connected layer. Be careful, though: if a reshape accidentally changes the batch size of x, you get shape mismatches later, e.g. when computing the loss.

A quick speed check (mean ± std. dev. of 7 runs, 100,000 loops each) puts flatten(), view() and reshape() all in the 3 to 3.5 microsecond range, so the choice between them is about semantics (view versus copy, contiguity), not speed. squeeze() and unsqueeze() are the complementary tools for removing or adding size-1 dimensions so a tensor fits a specific operation or layer.

Reshaping also appears in less obvious places: filter pruning on a pretrained model means changing the shape of the weight tensors inside Conv2d layers (logging "Pruning Conv-{index}" and rebuilding the filters), and a TimeDistributed-style layer applies a Linear layer across the time dimension, which again is typically implemented as a reshape before and after the wrapped layer.
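A minimal sketch of that fold/unfold trick; this is an illustration of the idea, not the TimeDistributed implementation referenced in the thread:

    import torch
    import torch.nn as nn

    class TimeDistributed(nn.Module):
        """Apply a module to every time step by folding time into the batch dimension."""
        def __init__(self, module):
            super().__init__()
            self.module = module

        def forward(self, x):                                    # x: (batch, time, features)
            b, t = x.shape[0], x.shape[1]
            out = self.module(x.reshape(b * t, *x.shape[2:]))    # (batch*time, ...)
            return out.reshape(b, t, *out.shape[1:])             # back to (batch, time, ...)

    layer = TimeDistributed(nn.Linear(9, 4))
    y = layer(torch.randn(1, 128, 9))                            # (1, 128, 4), e.g. the (1, 128, 9) batch above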
nn.Unflatten is the building block for the opposite direction. In a decoder you might have nn.Sequential(nn.Linear(self.bottleneck_size, 4096*2), nn.Unflatten(1, (1, 128, 8, 8)), ...): the first parameter is the dimension you would like to unflatten (dimension 0 is usually the batch dimension, hence the 1), and 4096*2 = 8192 matches 1*128*8*8. The same applies to a Decoder(latent_size, output_size, kernel1=4, stride1=2, ...) that receives a (1 x 8) sample from its encoder: the linear output has to be unflattened into a (channels, height, width) volume before the transposed convolutions. You have to flatten on the way into the fully connected layer, and unflatten on the way back out.

Shape errors in such models are usually mundane. An output shape of [15, 1] is a bit weird if the model definition implies [batch_size, 17*batch_size]. A convolution complaining that the input width is 1 while the kernel width is 2 probably wants its kernel and stride set to (2, 1), or the H and W of the input swapped (bias=False, by contrast, only disables the additive bias and has nothing to do with shapes).

One legitimate concern about sprinkling reshape layers through a network, besides the clutter they add to the model description, is that each of them must use reshape() and not view(), because the result of a transpose is not contiguous; also note that simply reshaping a (batch, features, windows) tensor, say from (N, 2C, L) to (N, C, 2L), does not rearrange the data the way the upscaling example wants, so the transpose has to happen explicitly first. The same contiguity point explains the classic idiom for a 3D LSTM output of shape (batch size, name length, embedding size): y0 = output.contiguous().view(-1, output.size(-1)) folds batch and sequence together into (batch size * name length, embedding size) so a Linear layer can be applied, after which the result can be reshaped back to 3D.
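To see the contiguity point in isolation (the sizes are arbitrary):

    import torch

    x = torch.randn(4, 6, 10)               # (N, 2C, L) with C = 3
    t = x.transpose(1, 2)                    # (N, L, 2C), not contiguous

    # t.view(4, -1) raises a RuntimeError because t is not contiguous.
    flat = t.reshape(4, -1)                  # works; copies if it has to
    flat2 = t.contiguous().view(4, -1)       # equivalent, with the copy made explicit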
In PyTorch, the -1 in a reshape or view means "infer this dimension": you tell PyTorch how many columns you want and let it decide the number of rows by itself (or the other way around). The torch.reshape() function gets this done easily and efficiently; the textbook examples reshape a 1-D tensor into two dimensions, into 4 rows and 2 columns, or into 8 rows with the column count inferred.

Beginners coming from Keras run into a cluster of related issues. Removing the last layers of a pretrained model changes the architecture so that it becomes useful for feature extraction instead of producing the original outputs. PyTorch tensors are laid out as [N, C, H, W], so the channel dimension comes before the height and width of the image. And when writing models with PyTorch, the parameters of a given layer commonly depend on the shape of the previous layer's output: the input size of the first fully connected layer in one thread was not 32*16*16, because the 32 and 16 there are the channel counts that the Conv2d layers expect and produce, not the output image size; with an actual conv output of torch.Size([batch, 32, 7, 7]), the flattened size has to be 32*7*7.

Other recurring snippets: embeddings initialised in __init__ with batch_size = 64 and embedding_dim = 200, where each batch holds 64 sentences of 80 words; a word-generating network that works perfectly with an LSTMCell plus a Linear layer but produces utter nonsense once the same architecture is rewritten with an LSTM instance and a Linear output layer, usually a sign that the sequence/batch dimension ordering expected by nn.LSTM does not match the data; and a model trained in test_custom_resnet18.ipynb whose last layer needs reshaping for transfer learning. Using the batch size as the first dimension works well as far as the reshape itself is concerned.
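A few concrete -1 examples (self-contained, nothing thread-specific):

    import torch

    t = torch.arange(8)              # 1D tensor: [0, 1, ..., 7]

    a = t.reshape(4, 2)              # 4 rows, 2 columns
    b = t.reshape(-1, 2)             # same result: PyTorch infers the 4 rows itself
    c = t.reshape(2, -1)             # 2 rows, 4 columns
    d = t.reshape(8, -1)             # 8 rows, 1 column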
How to reshape a tensor? PyTorch brings the torch.reshape() function to the table for exactly this, and flattening in particular is available in three forms: as a function, torch.flatten(x); as a tensor method, x.flatten(); and as a module, nn.Flatten(), generally used inside a model definition. All three share the same implementation (note that nn.Flatten defaults to start_dim=1, so it keeps the batch dimension), which makes a helper like

    def flatten(t):
        t = t.reshape(1, -1)
        t = t.squeeze()
        return t

mostly a teaching device for converting a tensor to 1-D; t.flatten() or t.reshape(-1) does the same thing.

The same reshape questions keep coming up around specific layers: confusion about the input shape expected by a GRU layer; a 2D CNN network for OCR whose conv output has to be reshaped before the Linear layer (often the issue is simply how that nn.Linear was defined); a GAN Generator whose Linear-layer output has to be reshaped to a particular dimension; and a variational autoencoder where the output of a linear layer has to be given the right shape before entering the 2D transposed-convolution decoder. A more unusual request: a pooling layer that outputs the median value over a window instead of the max or the mean, using only existing PyTorch functions. That, too, reduces to reshaping: unfold the windows into an extra dimension and take the median over it, as sketched below.
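A sketch of such a median pool built only from existing PyTorch ops; the function name and defaults are mine, not from the thread:

    import torch
    import torch.nn.functional as F

    def median_pool2d(x, kernel_size=2, stride=2):
        # x: (N, C, H, W)
        n, c, h, w = x.shape
        patches = F.unfold(x, kernel_size=kernel_size, stride=stride)   # (N, C*k*k, L)
        patches = patches.view(n, c, kernel_size * kernel_size, -1)     # (N, C, k*k, L)
        med = patches.median(dim=2).values                              # (N, C, L)
        out_h = (h - kernel_size) // stride + 1
        out_w = (w - kernel_size) // stride + 1
        return med.view(n, c, out_h, out_w)

    x = torch.randn(1, 3, 8, 8)
    print(median_pool2d(x).shape)        # torch.Size([1, 3, 4, 4])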
The functional form is torch.reshape(input, shape), with tensor.reshape(*shape) as the method version. Tensors are manipulated through views wherever possible, which keeps the operation memory-efficient and avoids unnecessary copying, and the new shape must be compatible with the original tensor's number of elements. Third-party packages add convenience on top: torchlayers, for instance, offers additional Keras-like layers (e.g. torchlayers.Reshape or torchlayers.StandardNormalNoise), additional SOTA layers mostly from ImageNet competitions (PolyNet, Squeeze-and-Excitation, StochasticDepth), useful defaults ("same" padding, default kernel_size=3 for Conv, dropout rates, etc.), zero overhead, and torchscript support.

Typical shape questions from the forums: how to reshape a CNN output tensor before the linear layer, and in one case how to have the 1-D conv features transformed to [batch_size, 1, 3] by that linear layer; multiplying the weight and the input without transposing, which is wrong, as you would need to transpose one matrix; how to print a model summary the way Keras does; and what a reshape inside an attention block is doing, e.g. values = values.reshape(N, value_len, self.heads, self.head_dim), which simply splits the embedding dimension into (heads, head_dim), with a matching reshape merging the heads back afterwards. For an LSTM(insize, hiddensize, num_layers=3, bidirectional=True, batch_first=True), the Linear layer needs the batch instances in the first dimension, but the LSTM returns the last hidden state with the layer/direction dimension first, so the hidden state has to be selected and reshaped before the Linear layer. To inspect shapes along the way, register forward hooks to get the output activations of specific layers, or return the intermediate activations directly from forward in a custom model. A concrete sizing example: with a 128 x 96 input image and a final activation of shape (10, 1, 96, 128), a linear layer that is supposed to return a 128*96 image needs in_features = 96*128.

Finally, the Linear-to-1x1-Conv2d conversion mentioned earlier is just a weight reshape: Linear weights have shape (out_features, in_features) while Conv2d weights have shape (out_channels, in_channels, kernel_height, kernel_width), so all you need is to treat the channels as features and add two size-1 dimensions to the weight inside a torch.no_grad() block.
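Concretely, a sketch of that conversion; the 16-in/8-out sizes mirror the snippet quoted earlier, the rest is illustrative:

    import torch
    import torch.nn as nn

    linear = nn.Linear(16, 8)
    conv = nn.Conv2d(16, 8, kernel_size=1)

    with torch.no_grad():
        # Linear weight: (out_features, in_features) = (8, 16)
        # Conv2d weight: (out_channels, in_channels, kH, kW) = (8, 16, 1, 1)
        conv.weight.copy_(linear.weight[:, :, None, None])    # or linear.weight.reshape(8, 16, 1, 1)
        conv.bias.copy_(linear.bias)

    x = torch.randn(5, 16)
    out_linear = linear(x)                                     # (5, 8)
    out_conv = conv(x[:, :, None, None]).flatten(1)            # each sample treated as a 1x1 image
    print(torch.allclose(out_linear, out_conv, atol=1e-6))     # True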
The torch.nn namespace provides all the building blocks used above, and several of the remaining questions are variations on one problem, restated as: apply a linear layer to a flattened view of the last k dimensions of a non-contiguous tensor without copying it. In general you cannot; a view needs a compatible memory layout, so reshape() or .contiguous() will copy when necessary.

A few loose ends from the same threads. The Multiply layer in Keras multiplies (element-wise) a list of inputs; in PyTorch you either multiply the activations directly in the forward method of your model or write a small custom nn.Module for it. There is no such concept as a "layer's input" you can query on the module; the input is simply whatever you pass to forward. For a given nn.Module m, you can extract its layer name with type(m).__name__. Measuring the inference time of a single layer, for example the first fully connected layer of AlexNet as the batch size changes, means timing just that module on appropriately shaped inputs. If 512*block.expansion does not match the incoming number of features in a ResNet-style model, either decrease the spatial size of the activation through conv/pool layers or set in_features to the actual value (51200 in that thread). An autoencoder-style model can reshape the flat output of its bottleneck into a 3D volume and pass it into the de-convolution layers; and TensorFlow's space_to_depth, which one thread could not find in PyTorch, is nowadays covered by nn.PixelUnshuffle, itself just a structured reshape-and-permute.

Some questions are about structure rather than a single reshape: building a model with skip connections between two parallel sequential branches (the merge-and-run pattern, which you can look up in the literature), reducing the number of channels of a 5D tensor, or mapping an input of shape [K, n] (K samples of dimension n) each to a matrix of shape [d, n]. The nn.Linear(784, 100) confusion fits here as well: the stored weight has shape [100, 784], the input has shape [32, 784], and the layer computes output = input @ weight.T + bias, giving [32, 100], so the matrix multiplication works because the weight is transposed internally.

One last, very concrete reshape pattern: take an input vector with 12 features and produce 3 outputs, where the first output is computed only from the first 4 input features, the second from features 5 to 8, and so on. Instead of fully connecting input and output, reshape the input to (batch, 3, 4) and apply a small per-group weight, as sketched below.
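A sketch of that block-wise mapping; the module name and initialization are mine, only the 12-to-3 grouping comes from the question:

    import torch
    import torch.nn as nn

    class GroupedLinear(nn.Module):
        """Each of the 3 outputs sees only its own block of 4 input features."""
        def __init__(self, groups=3, group_size=4):
            super().__init__()
            self.groups, self.group_size = groups, group_size
            self.weight = nn.Parameter(torch.randn(groups, group_size))
            self.bias = nn.Parameter(torch.zeros(groups))

        def forward(self, x):                                         # x: (batch, 12)
            x = x.reshape(x.size(0), self.groups, self.group_size)    # (batch, 3, 4)
            return (x * self.weight).sum(dim=-1) + self.bias          # (batch, 3)

    layer = GroupedLinear()
    out = layer(torch.randn(8, 12))      # (8, 3)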