
Compiling yolov5 #253

@Ownmarc

Description


Hey, I am looking to run the yolov5 model (https://github.com/ultralytics/yolov5) on an Inf1 instance for inference.

I am first trying to get the original COCO-pretrained model to compile, but I am hitting the error below. I have followed several AWS tutorials (yolov4 and resnet) and am compiling on a c5.xlarge instance (4 vCPUs, 8 GB of RAM) with the Ubuntu 18 DLAMI, in the aws_neuron_pytorch_p36 python env.

One thing I noticed is that neuron-cc requires numpy <= 1.18.4, while yolov5 requires numpy >= 1.18.5. I first made sure the model ran correctly with numpy 1.18.5, then downgraded to numpy 1.18.4 per the neuron-cc requirement before compiling/converting the model.
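A small check like the following (my own hypothetical helper, not part of yolov5 or neuron-cc; the 1.18.4 ceiling is taken from neuron-cc's pin above) can catch the wrong numpy pin before kicking off a long compile:

```python
# Hypothetical guard: verify the installed numpy satisfies neuron-cc's
# ceiling (<= 1.18.4) before starting a compile.
def numpy_ok_for_neuron_cc(ver: str, ceiling=(1, 18, 4)) -> bool:
    """True if version string `ver` (e.g. "1.18.4") is at or below `ceiling`."""
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return parts <= ceiling

# yolov5's floor (1.18.5) is already too new for neuron-cc's pin:
print(numpy_ok_for_neuron_cc("1.18.4"))  # True
print(numpy_ok_for_neuron_cc("1.18.5"))  # False
```

In practice you would pass in `numpy.__version__` after each pip install/downgrade step.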

I'm not exactly sure where to look to debug this (if it's debuggable at all) and would welcome any hints.

fake_image = torch.zeros([1, 3, 608, 608], dtype=torch.float32)

Here is the output of torch.neuron.analyze_model(model, example_inputs=[fake_image]):

/home/ubuntu/yolov5/models/yolo.py:48: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.grid[i].shape[2:4] != x[i].shape[2:4]:
/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch/jit/_trace.py:940: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
  _force_outplace,
INFO:Neuron:The following operations are currently supported in torch-neuron for this model:
INFO:Neuron:prim::TupleConstruct
INFO:Neuron:aten::permute
INFO:Neuron:aten::slice
INFO:Neuron:prim::Constant
INFO:Neuron:prim::ListConstruct
INFO:Neuron:aten::pow
INFO:Neuron:aten::max_pool2d
INFO:Neuron:aten::upsample_nearest2d
INFO:Neuron:aten::Int
INFO:Neuron:aten::mul
INFO:Neuron:aten::_convolution
INFO:Neuron:prim::NumToTensor
INFO:Neuron:aten::sub
INFO:Neuron:aten::sigmoid
INFO:Neuron:aten::silu
INFO:Neuron:prim::TupleUnpack
INFO:Neuron:aten::expand
INFO:Neuron:aten::contiguous
INFO:Neuron:aten::copy_
INFO:Neuron:aten::size
INFO:Neuron:aten::view
INFO:Neuron:aten::cat
INFO:Neuron:aten::select
INFO:Neuron:aten::add
INFO:Neuron:100.00% of all operations (including primitives) (2369 of 2369) are supported
INFO:Neuron:100.00% of arithmetic operations (304 of 304) are supported
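For reference, the whole flow I'm running is roughly the following (a sketch only; `compile_for_neuron` is my own wrapper name, and torch / torch-neuron are assumed to come from the aws_neuron_pytorch_p36 env):

```python
def compile_for_neuron(model, input_shape=(1, 3, 608, 608), compiler_args="-O2"):
    """Analyze operator support, then trace/compile the model with neuron-cc."""
    import torch
    import torch.neuron  # provided by the torch-neuron package on the DLAMI

    fake_image = torch.zeros(list(input_shape), dtype=torch.float32)
    # First pass: report which aten/prim ops torch-neuron claims to support.
    torch.neuron.analyze_model(model, example_inputs=[fake_image])
    # Second pass: actually compile; supported subgraphs are fused and
    # handed to neuron-cc to produce a NEFF.
    return torch.neuron.trace(model, example_inputs=[fake_image],
                              compiler_args=compiler_args)
```

Both calls are the same ones shown in the logs above and below.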

I then run the compile step, model_neuron = torch.neuron.trace(model, example_inputs=[fake_image], compiler_args="-O2"), which fails with the following trace:

/home/ubuntu/yolov5/models/yolo.py:48: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.grid[i].shape[2:4] != x[i].shape[2:4]:
/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch/jit/_trace.py:940: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
  _force_outplace,
INFO:Neuron:All operators are compiled by neuron-cc (this does not guarantee that neuron-cc will successfully compile)
INFO:Neuron:Number of arithmetic operators (pre-compilation) before = 304, fused = 304, percent fused = 100.0%
/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch/jit/_trace.py:779: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
  name, func, example_inputs, var_lookup_fn, strict, _force_outplace
INFO:Neuron:Compiler args type is <class 'str'> value is -O2
INFO:Neuron:compiling function _NeuronGraph$1842 with neuron-cc
INFO:Neuron:Compiling with command line: '/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/bin/neuron-cc compile /tmp/tmp274rqrqq/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /tmp/tmp274rqrqq/graph_def.neff --io-config {"inputs": {"tensor.1:0": [[1, 3, 608, 608], "float32"]}, "outputs": ["concat_14:0"]} -O2 --verbose 35'
WARNING:Neuron:torch.neuron.trace failed on _NeuronGraph#0(%[790] : torch.float32(1, 3, 608, 608)):
  Focus#50:
    %[./6] : torch.float32(1, 3, 304, 608) = ./aten::slice#5(%[790])
    %[./12] : torch.float32(1, 3, 304, 304) = ./aten::slice#10(%[./6])
    %[./17] : torch.float32(1, 3, 304, 608) = ./aten::slice#15(%[790])
    %[./22] : torch.float32(1, 3, 304, 304) = ./aten::slice#20(%[./17])
    %[./27] : torch.float32(1, 3, 304, 608) = ./aten::slice#25(%[790])
    %[./32] : torch.float32(1, 3, 304, 304) = ./aten::slice#30(%[./27])
    %[./37] : torch.float32(1, 3, 304, 608) = ./aten::slice#35(%[790])
    %[./42] : torch.float32(1, 3, 304, 304) = ./aten::slice#40(%[./37])
    %[./45] : torch.float32(1, 12, 304, 304) = ./aten::cat#43()
  Focus#50/Conv#44/Conv2d#2:
        %[Focus#50/Conv#44/6] : torch.float32(1, 48, 304, 304) = ./aten::_convolution#20(%[Focus#50/45])
  Focus#50/Conv#44/SiLU#3:
        %[4215] : torch.float32(1, 48, 304, 304) = ./aten::silu_#0(%[Focus#50/Conv#44/6])
  Conv#51/Conv2d#2:
      %[Conv#51/6] : torch.float32(1, 96, 152, 152) = ./aten::_convolution#20(%[4215])
  Conv#51/SiLU#3:
      %[4216] : torch.float32(1, 96, 152, 152) = ./aten::silu_#0(%[Conv#51/6])
  C3#52/Conv#4/Conv2d#2:
        %[C3#52/Conv#4/6] : torch.float32(1, 48, 152, 152) = ./aten::_convolution#20(%[4216])
  C3#52/Conv#4/SiLU#3:
        %[C3#52/13] : torch.float32(1, 48, 152, 152) = ./aten::silu_#0(%[C3#52/Conv#4/6])
  C3#52/Sequential#5/Bottleneck#2/Conv#2/Conv2d#2:
            %[C3#52/Sequential#5/Bottleneck#2/Conv#2/6] : torch.float32(1, 48, 152, 152) = ./aten::_convolution#20(%[C3#52/13])
  C3#52/Sequential#5/Bottleneck#2/Conv#2/SiLU#3:
            %[C3#52/Sequential#5/Bottleneck#2/8] : torch.float32(1, 48, 152, 152) = ./aten::silu_#0(%[C3#52/Sequential#5/Bottleneck#2/Conv#2/6])
  C3#52/Sequential#5/Bottleneck#2/Conv#3/Conv2d#2:
            %[C3#52/Sequential#5/Bottleneck#2/Conv#3/6] : torch.float32(1, 48, 152, 152) = ./aten::_convolution#20(%[C3#52/Sequential#5/Bottleneck#2/8])
  C3#52/Sequential#5/Bottleneck#2/Conv#3/SiLU#3:
            %[C3#52/Sequential#5/Bottleneck#2/9] : torch.float32(1, 48, 152, 152) = ./aten::silu_#0(%[C3#52/Sequential#5/Bottleneck#2/Conv#3/6])
  C3#52/Sequential#5/Bottleneck#2:
        %[C3#52/Sequential#5/6] : torch.float32(1, 48, 152, 152) = ./aten::add#5(%[C3#52/13], %[./9])
  C3#52/Sequential#5/Bottleneck#3/Conv#2/Conv2d#2:
            %[C3#52/Sequential#5/Bottleneck#3/Conv#2/6] : torch.float32(1, 48, 152, 152) = ./aten::_convolution#20(%[C3#52/Sequential#5/6])
  C3#52/Sequential#5/Bottleneck#3/Conv#2/SiLU#3:
            %[C3#52/Sequential#5/Bottleneck#3/8] : torch.float32(1, 48, 152, 152) = ./aten::silu_#0(%[C3#52/Sequential#5/Bottleneck#3/Conv#2/6])
  C3#52/Sequential#5/Bottleneck#3/Conv#3/Conv2d#2:
            %[C3#52/Sequential#5/Bottleneck#3/Conv#3/6] : torch.float32(1, 48, 152, 152) = ./aten::_convolution#20(%[C3#52/Sequential#5/Bottleneck#3/8])
  C3#52/Sequential#5/Bottleneck#3/Conv#3/SiLU#3:
            %[C3#52/Sequential#5/Bottleneck#3/9] : torch.float32(1, 48, 152, 152) = ./aten::silu_#0(%[C3#52/Sequential#5/Bottleneck#3/Conv#3/6])
  C3#52/Sequential#5/Bottleneck#3:
        %[C3#52/14] : torch.float32(1, 48, 152, 152) = ./aten::add#5(%[C3#52/Sequential#5/6], %[./9])
  C3#52/Conv#6/Conv2d#2:
        %[C3#52/Conv#6/6] : torch.float32(1, 48, 152, 152) = ./aten::_convolution#20(%[4216])
  C3#52/Conv#6/SiLU#3:
        %[C3#52/15] : torch.float32(1, 48, 152, 152) = ./aten::silu_#0(%[C3#52/Conv#6/6])
  C3#52:
    %[./11] : torch.float32(1, 96, 152, 152) = ./aten::cat#9()
  C3#52/Conv#10/Conv2d#2:
        %[C3#52/Conv#10/6] : torch.float32(1, 96, 152, 152) = ./aten::_convolution#20(%[C3#52/11])
  C3#52/Conv#10/SiLU#3:
        %[4217] : torch.float32(1, 96, 152, 152) = ./aten::silu_#0(%[C3#52/Conv#10/6])
  Conv#53/Conv2d#2:
      %[Conv#53/6] : torch.float32(1, 192, 76, 76) = ./aten::_convolution#20(%[4217])
  Conv#53/SiLU#3:
      %[4218] : torch.float32(1, 192, 76, 76) = ./aten::silu_#0(%[Conv#53/6])
  C3#54/Conv#4/Conv2d#2:
        %[C3#54/Conv#4/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[4218])
  C3#54/Conv#4/SiLU#3:
        %[C3#54/13] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Conv#4/6])
  C3#54/Sequential#5/Bottleneck#6/Conv#2/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#6/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/13])
  C3#54/Sequential#5/Bottleneck#6/Conv#2/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#6/8] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#6/Conv#2/6])
  C3#54/Sequential#5/Bottleneck#6/Conv#3/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#6/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/Bottleneck#6/8])
  C3#54/Sequential#5/Bottleneck#6/Conv#3/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#6/9] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#6/Conv#3/6])
  C3#54/Sequential#5/Bottleneck#6:
        %[C3#54/Sequential#5/14] : torch.float32(1, 96, 76, 76) = ./aten::add#5(%[C3#54/13], %[./9])
  C3#54/Sequential#5/Bottleneck#7/Conv#2/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#7/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/14])
  C3#54/Sequential#5/Bottleneck#7/Conv#2/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#7/8] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#7/Conv#2/6])
  C3#54/Sequential#5/Bottleneck#7/Conv#3/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#7/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/Bottleneck#7/8])
  C3#54/Sequential#5/Bottleneck#7/Conv#3/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#7/9] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#7/Conv#3/6])
  C3#54/Sequential#5/Bottleneck#7:
        %[C3#54/Sequential#5/15] : torch.float32(1, 96, 76, 76) = ./aten::add#5(%[C3#54/Sequential#5/14], %[./9])
  C3#54/Sequential#5/Bottleneck#8/Conv#2/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#8/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/15])
  C3#54/Sequential#5/Bottleneck#8/Conv#2/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#8/8] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#8/Conv#2/6])
  C3#54/Sequential#5/Bottleneck#8/Conv#3/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#8/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/Bottleneck#8/8])
  C3#54/Sequential#5/Bottleneck#8/Conv#3/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#8/9] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#8/Conv#3/6])
  C3#54/Sequential#5/Bottleneck#8:
        %[C3#54/Sequential#5/16] : torch.float32(1, 96, 76, 76) = ./aten::add#5(%[C3#54/Sequential#5/15], %[./9])
  C3#54/Sequential#5/Bottleneck#9/Conv#2/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#9/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/16])
  C3#54/Sequential#5/Bottleneck#9/Conv#2/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#9/8] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#9/Conv#2/6])
  C3#54/Sequential#5/Bottleneck#9/Conv#3/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#9/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/Bottleneck#9/8])
  C3#54/Sequential#5/Bottleneck#9/Conv#3/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#9/9] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#9/Conv#3/6])
  C3#54/Sequential#5/Bottleneck#9:
        %[C3#54/Sequential#5/17] : torch.float32(1, 96, 76, 76) = ./aten::add#5(%[C3#54/Sequential#5/16], %[./9])
  C3#54/Sequential#5/Bottleneck#10/Conv#2/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#10/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/17])
  C3#54/Sequential#5/Bottleneck#10/Conv#2/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#10/8] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#10/Conv#2/6])
  C3#54/Sequential#5/Bottleneck#10/Conv#3/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#10/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/Bottleneck#10/8])
  C3#54/Sequential#5/Bottleneck#10/Conv#3/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#10/9] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#10/Conv#3/6])
  C3#54/Sequential#5/Bottleneck#10:
        %[C3#54/Sequential#5/18] : torch.float32(1, 96, 76, 76) = ./aten::add#5(%[C3#54/Sequential#5/17], %[./9])
  C3#54/Sequential#5/Bottleneck#11/Conv#2/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#11/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/18])
  C3#54/Sequential#5/Bottleneck#11/Conv#2/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#11/8] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#11/Conv#2/6])
  C3#54/Sequential#5/Bottleneck#11/Conv#3/Conv2d#2:
            %[C3#54/Sequential#5/Bottleneck#11/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#54/Sequential#5/Bottleneck#11/8])
  C3#54/Sequential#5/Bottleneck#11/Conv#3/SiLU#3:
            %[C3#54/Sequential#5/Bottleneck#11/9] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Sequential#5/Bottleneck#11/Conv#3/6])
  C3#54/Sequential#5/Bottleneck#11:
        %[C3#54/14] : torch.float32(1, 96, 76, 76) = ./aten::add#5(%[C3#54/Sequential#5/18], %[./9])
  C3#54/Conv#6/Conv2d#2:
        %[C3#54/Conv#6/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[4218])
  C3#54/Conv#6/SiLU#3:
        %[C3#54/15] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#54/Conv#6/6])
  C3#54:
    %[./11] : torch.float32(1, 192, 76, 76) = ./aten::cat#9()
  C3#54/Conv#10/Conv2d#2:
        %[C3#54/Conv#10/6] : torch.float32(1, 192, 76, 76) = ./aten::_convolution#20(%[C3#54/11])
  C3#54/Conv#10/SiLU#3:
        %[4219] : torch.float32(1, 192, 76, 76) = ./aten::silu_#0(%[C3#54/Conv#10/6])
  Conv#55/Conv2d#2:
      %[Conv#55/6] : torch.float32(1, 384, 38, 38) = ./aten::_convolution#20(%[4219])
  Conv#55/SiLU#3:
      %[4220] : torch.float32(1, 384, 38, 38) = ./aten::silu_#0(%[Conv#55/6])
  C3#56/Conv#4/Conv2d#2:
        %[C3#56/Conv#4/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4220])
  C3#56/Conv#4/SiLU#3:
        %[C3#56/13] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Conv#4/6])
  C3#56/Sequential#5/Bottleneck#6/Conv#2/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#6/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/13])
  C3#56/Sequential#5/Bottleneck#6/Conv#2/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#6/8] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#6/Conv#2/6])
  C3#56/Sequential#5/Bottleneck#6/Conv#3/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#6/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/Bottleneck#6/8])
  C3#56/Sequential#5/Bottleneck#6/Conv#3/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#6/9] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#6/Conv#3/6])
  C3#56/Sequential#5/Bottleneck#6:
        %[C3#56/Sequential#5/14] : torch.float32(1, 192, 38, 38) = ./aten::add#5(%[C3#56/13], %[./9])
  C3#56/Sequential#5/Bottleneck#7/Conv#2/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#7/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/14])
  C3#56/Sequential#5/Bottleneck#7/Conv#2/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#7/8] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#7/Conv#2/6])
  C3#56/Sequential#5/Bottleneck#7/Conv#3/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#7/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/Bottleneck#7/8])
  C3#56/Sequential#5/Bottleneck#7/Conv#3/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#7/9] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#7/Conv#3/6])
  C3#56/Sequential#5/Bottleneck#7:
        %[C3#56/Sequential#5/15] : torch.float32(1, 192, 38, 38) = ./aten::add#5(%[C3#56/Sequential#5/14], %[./9])
  C3#56/Sequential#5/Bottleneck#8/Conv#2/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#8/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/15])
  C3#56/Sequential#5/Bottleneck#8/Conv#2/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#8/8] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#8/Conv#2/6])
  C3#56/Sequential#5/Bottleneck#8/Conv#3/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#8/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/Bottleneck#8/8])
  C3#56/Sequential#5/Bottleneck#8/Conv#3/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#8/9] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#8/Conv#3/6])
  C3#56/Sequential#5/Bottleneck#8:
        %[C3#56/Sequential#5/16] : torch.float32(1, 192, 38, 38) = ./aten::add#5(%[C3#56/Sequential#5/15], %[./9])
  C3#56/Sequential#5/Bottleneck#9/Conv#2/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#9/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/16])
  C3#56/Sequential#5/Bottleneck#9/Conv#2/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#9/8] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#9/Conv#2/6])
  C3#56/Sequential#5/Bottleneck#9/Conv#3/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#9/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/Bottleneck#9/8])
  C3#56/Sequential#5/Bottleneck#9/Conv#3/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#9/9] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#9/Conv#3/6])
  C3#56/Sequential#5/Bottleneck#9:
        %[C3#56/Sequential#5/17] : torch.float32(1, 192, 38, 38) = ./aten::add#5(%[C3#56/Sequential#5/16], %[./9])
  C3#56/Sequential#5/Bottleneck#10/Conv#2/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#10/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/17])
  C3#56/Sequential#5/Bottleneck#10/Conv#2/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#10/8] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#10/Conv#2/6])
  C3#56/Sequential#5/Bottleneck#10/Conv#3/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#10/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/Bottleneck#10/8])
  C3#56/Sequential#5/Bottleneck#10/Conv#3/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#10/9] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#10/Conv#3/6])
  C3#56/Sequential#5/Bottleneck#10:
        %[C3#56/Sequential#5/18] : torch.float32(1, 192, 38, 38) = ./aten::add#5(%[C3#56/Sequential#5/17], %[./9])
  C3#56/Sequential#5/Bottleneck#11/Conv#2/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#11/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/18])
  C3#56/Sequential#5/Bottleneck#11/Conv#2/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#11/8] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#11/Conv#2/6])
  C3#56/Sequential#5/Bottleneck#11/Conv#3/Conv2d#2:
            %[C3#56/Sequential#5/Bottleneck#11/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#56/Sequential#5/Bottleneck#11/8])
  C3#56/Sequential#5/Bottleneck#11/Conv#3/SiLU#3:
            %[C3#56/Sequential#5/Bottleneck#11/9] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Sequential#5/Bottleneck#11/Conv#3/6])
  C3#56/Sequential#5/Bottleneck#11:
        %[C3#56/14] : torch.float32(1, 192, 38, 38) = ./aten::add#5(%[C3#56/Sequential#5/18], %[./9])
  C3#56/Conv#6/Conv2d#2:
        %[C3#56/Conv#6/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4220])
  C3#56/Conv#6/SiLU#3:
        %[C3#56/15] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#56/Conv#6/6])
  C3#56:
    %[./11] : torch.float32(1, 384, 38, 38) = ./aten::cat#9()
  C3#56/Conv#10/Conv2d#2:
        %[C3#56/Conv#10/6] : torch.float32(1, 384, 38, 38) = ./aten::_convolution#20(%[C3#56/11])
  C3#56/Conv#10/SiLU#3:
        %[4221] : torch.float32(1, 384, 38, 38) = ./aten::silu_#0(%[C3#56/Conv#10/6])
  Conv#57/Conv2d#2:
      %[Conv#57/6] : torch.float32(1, 768, 19, 19) = ./aten::_convolution#20(%[4221])
  Conv#57/SiLU#3:
      %[4222] : torch.float32(1, 768, 19, 19) = ./aten::silu_#0(%[Conv#57/6])
  SPP#58/Conv#8/Conv2d#2:
        %[SPP#58/Conv#8/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[4222])
  SPP#58/Conv#8/SiLU#3:
        %[SPP#58/18] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[SPP#58/Conv#8/6])
  SPP#58/MaxPool2d#9:
      %[SPP#58/19] : torch.float32(1, 384, 19, 19) = ./aten::max_pool2d#13(%[SPP#58/18])
  SPP#58/MaxPool2d#10:
      %[SPP#58/20] : torch.float32(1, 384, 19, 19) = ./aten::max_pool2d#13(%[SPP#58/18])
  SPP#58/MaxPool2d#11:
      %[SPP#58/21] : torch.float32(1, 384, 19, 19) = ./aten::max_pool2d#13(%[SPP#58/18])
  SPP#58:
    %[./16] : torch.float32(1, 1536, 19, 19) = ./aten::cat#14()
  SPP#58/Conv#15/Conv2d#2:
        %[SPP#58/Conv#15/6] : torch.float32(1, 768, 19, 19) = ./aten::_convolution#20(%[SPP#58/16])
  SPP#58/Conv#15/SiLU#3:
        %[4223] : torch.float32(1, 768, 19, 19) = ./aten::silu_#0(%[SPP#58/Conv#15/6])
  C3#59/Conv#4/Conv2d#2:
        %[C3#59/Conv#4/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[4223])
  C3#59/Conv#4/SiLU#3:
        %[C3#59/13] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#59/Conv#4/6])
  C3#59/Sequential#5/Bottleneck#2/Conv#2/Conv2d#2:
            %[C3#59/Sequential#5/Bottleneck#2/Conv#2/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#59/13])
  C3#59/Sequential#5/Bottleneck#2/Conv#2/SiLU#3:
            %[C3#59/Sequential#5/Bottleneck#2/6] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#59/Sequential#5/Bottleneck#2/Conv#2/6])
  C3#59/Sequential#5/Bottleneck#2/Conv#3/Conv2d#2:
            %[C3#59/Sequential#5/Bottleneck#2/Conv#3/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#59/Sequential#5/Bottleneck#2/6])
  C3#59/Sequential#5/Bottleneck#2/Conv#3/SiLU#3:
            %[C3#59/Sequential#5/6] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#59/Sequential#5/Bottleneck#2/Conv#3/6])
  C3#59/Sequential#5/Bottleneck#3/Conv#2/Conv2d#2:
            %[C3#59/Sequential#5/Bottleneck#3/Conv#2/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#59/Sequential#5/6])
  C3#59/Sequential#5/Bottleneck#3/Conv#2/SiLU#3:
            %[C3#59/Sequential#5/Bottleneck#3/6] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#59/Sequential#5/Bottleneck#3/Conv#2/6])
  C3#59/Sequential#5/Bottleneck#3/Conv#3/Conv2d#2:
            %[C3#59/Sequential#5/Bottleneck#3/Conv#3/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#59/Sequential#5/Bottleneck#3/6])
  C3#59/Sequential#5/Bottleneck#3/Conv#3/SiLU#3:
            %[C3#59/14] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#59/Sequential#5/Bottleneck#3/Conv#3/6])
  C3#59/Conv#6/Conv2d#2:
        %[C3#59/Conv#6/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[4223])
  C3#59/Conv#6/SiLU#3:
        %[C3#59/15] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#59/Conv#6/6])
  C3#59:
    %[./11] : torch.float32(1, 768, 19, 19) = ./aten::cat#9()
  C3#59/Conv#10/Conv2d#2:
        %[C3#59/Conv#10/6] : torch.float32(1, 768, 19, 19) = ./aten::_convolution#20(%[C3#59/11])
  C3#59/Conv#10/SiLU#3:
        %[4224] : torch.float32(1, 768, 19, 19) = ./aten::silu_#0(%[C3#59/Conv#10/6])
  Conv#60/Conv2d#2:
      %[Conv#60/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[4224])
  Conv#60/SiLU#3:
      %[4225] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[Conv#60/6])
  Upsample#61:
    %[4226] : torch.float32(1, 384, 38, 38) = ./aten::upsample_nearest2d#4(%[4225])
  Concat#62:
    %[4227] : torch.float32(1, 768, 38, 38) = ./aten::cat#2()
  C3#63/Conv#4/Conv2d#2:
        %[C3#63/Conv#4/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4227])
  C3#63/Conv#4/SiLU#3:
        %[C3#63/13] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#63/Conv#4/6])
  C3#63/Sequential#5/Bottleneck#2/Conv#2/Conv2d#2:
            %[C3#63/Sequential#5/Bottleneck#2/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#63/13])
  C3#63/Sequential#5/Bottleneck#2/Conv#2/SiLU#3:
            %[C3#63/Sequential#5/Bottleneck#2/6] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#63/Sequential#5/Bottleneck#2/Conv#2/6])
  C3#63/Sequential#5/Bottleneck#2/Conv#3/Conv2d#2:
            %[C3#63/Sequential#5/Bottleneck#2/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#63/Sequential#5/Bottleneck#2/6])
  C3#63/Sequential#5/Bottleneck#2/Conv#3/SiLU#3:
            %[C3#63/Sequential#5/6] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#63/Sequential#5/Bottleneck#2/Conv#3/6])
  C3#63/Sequential#5/Bottleneck#3/Conv#2/Conv2d#2:
            %[C3#63/Sequential#5/Bottleneck#3/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#63/Sequential#5/6])
  C3#63/Sequential#5/Bottleneck#3/Conv#2/SiLU#3:
            %[C3#63/Sequential#5/Bottleneck#3/6] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#63/Sequential#5/Bottleneck#3/Conv#2/6])
  C3#63/Sequential#5/Bottleneck#3/Conv#3/Conv2d#2:
            %[C3#63/Sequential#5/Bottleneck#3/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#63/Sequential#5/Bottleneck#3/6])
  C3#63/Sequential#5/Bottleneck#3/Conv#3/SiLU#3:
            %[C3#63/14] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#63/Sequential#5/Bottleneck#3/Conv#3/6])
  C3#63/Conv#6/Conv2d#2:
        %[C3#63/Conv#6/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4227])
  C3#63/Conv#6/SiLU#3:
        %[C3#63/15] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#63/Conv#6/6])
  C3#63:
    %[./11] : torch.float32(1, 384, 38, 38) = ./aten::cat#9()
  C3#63/Conv#10/Conv2d#2:
        %[C3#63/Conv#10/6] : torch.float32(1, 384, 38, 38) = ./aten::_convolution#20(%[C3#63/11])
  C3#63/Conv#10/SiLU#3:
        %[4228] : torch.float32(1, 384, 38, 38) = ./aten::silu_#0(%[C3#63/Conv#10/6])
  Conv#64/Conv2d#2:
      %[Conv#64/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4228])
  Conv#64/SiLU#3:
      %[4229] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[Conv#64/6])
  Upsample#65:
    %[4230] : torch.float32(1, 192, 76, 76) = ./aten::upsample_nearest2d#4(%[4229])
  Concat#66:
    %[4231] : torch.float32(1, 384, 76, 76) = ./aten::cat#2()
  C3#67/Conv#4/Conv2d#2:
        %[C3#67/Conv#4/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[4231])
  C3#67/Conv#4/SiLU#3:
        %[C3#67/13] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#67/Conv#4/6])
  C3#67/Sequential#5/Bottleneck#2/Conv#2/Conv2d#2:
            %[C3#67/Sequential#5/Bottleneck#2/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#67/13])
  C3#67/Sequential#5/Bottleneck#2/Conv#2/SiLU#3:
            %[C3#67/Sequential#5/Bottleneck#2/6] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#67/Sequential#5/Bottleneck#2/Conv#2/6])
  C3#67/Sequential#5/Bottleneck#2/Conv#3/Conv2d#2:
            %[C3#67/Sequential#5/Bottleneck#2/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#67/Sequential#5/Bottleneck#2/6])
  C3#67/Sequential#5/Bottleneck#2/Conv#3/SiLU#3:
            %[C3#67/Sequential#5/6] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#67/Sequential#5/Bottleneck#2/Conv#3/6])
  C3#67/Sequential#5/Bottleneck#3/Conv#2/Conv2d#2:
            %[C3#67/Sequential#5/Bottleneck#3/Conv#2/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#67/Sequential#5/6])
  C3#67/Sequential#5/Bottleneck#3/Conv#2/SiLU#3:
            %[C3#67/Sequential#5/Bottleneck#3/6] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#67/Sequential#5/Bottleneck#3/Conv#2/6])
  C3#67/Sequential#5/Bottleneck#3/Conv#3/Conv2d#2:
            %[C3#67/Sequential#5/Bottleneck#3/Conv#3/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[C3#67/Sequential#5/Bottleneck#3/6])
  C3#67/Sequential#5/Bottleneck#3/Conv#3/SiLU#3:
            %[C3#67/14] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#67/Sequential#5/Bottleneck#3/Conv#3/6])
  C3#67/Conv#6/Conv2d#2:
        %[C3#67/Conv#6/6] : torch.float32(1, 96, 76, 76) = ./aten::_convolution#20(%[4231])
  C3#67/Conv#6/SiLU#3:
        %[C3#67/15] : torch.float32(1, 96, 76, 76) = ./aten::silu_#0(%[C3#67/Conv#6/6])
  C3#67:
    %[./11] : torch.float32(1, 192, 76, 76) = ./aten::cat#9()
  C3#67/Conv#10/Conv2d#2:
        %[C3#67/Conv#10/6] : torch.float32(1, 192, 76, 76) = ./aten::_convolution#20(%[C3#67/11])
  C3#67/Conv#10/SiLU#3:
        %[4232] : torch.float32(1, 192, 76, 76) = ./aten::silu_#0(%[C3#67/Conv#10/6])
  Conv#68/Conv2d#2:
      %[Conv#68/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4232])
  Conv#68/SiLU#3:
      %[4233] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[Conv#68/6])
  Concat#69:
    %[4234] : torch.float32(1, 384, 38, 38) = ./aten::cat#2()
  C3#70/Conv#4/Conv2d#2:
        %[C3#70/Conv#4/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4234])
  C3#70/Conv#4/SiLU#3:
        %[C3#70/13] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#70/Conv#4/6])
  C3#70/Sequential#5/Bottleneck#2/Conv#2/Conv2d#2:
            %[C3#70/Sequential#5/Bottleneck#2/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#70/13])
  C3#70/Sequential#5/Bottleneck#2/Conv#2/SiLU#3:
            %[C3#70/Sequential#5/Bottleneck#2/6] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#70/Sequential#5/Bottleneck#2/Conv#2/6])
  C3#70/Sequential#5/Bottleneck#2/Conv#3/Conv2d#2:
            %[C3#70/Sequential#5/Bottleneck#2/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#70/Sequential#5/Bottleneck#2/6])
  C3#70/Sequential#5/Bottleneck#2/Conv#3/SiLU#3:
            %[C3#70/Sequential#5/6] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#70/Sequential#5/Bottleneck#2/Conv#3/6])
  C3#70/Sequential#5/Bottleneck#3/Conv#2/Conv2d#2:
            %[C3#70/Sequential#5/Bottleneck#3/Conv#2/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#70/Sequential#5/6])
  C3#70/Sequential#5/Bottleneck#3/Conv#2/SiLU#3:
            %[C3#70/Sequential#5/Bottleneck#3/6] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#70/Sequential#5/Bottleneck#3/Conv#2/6])
  C3#70/Sequential#5/Bottleneck#3/Conv#3/Conv2d#2:
            %[C3#70/Sequential#5/Bottleneck#3/Conv#3/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[C3#70/Sequential#5/Bottleneck#3/6])
  C3#70/Sequential#5/Bottleneck#3/Conv#3/SiLU#3:
            %[C3#70/14] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#70/Sequential#5/Bottleneck#3/Conv#3/6])
  C3#70/Conv#6/Conv2d#2:
        %[C3#70/Conv#6/6] : torch.float32(1, 192, 38, 38) = ./aten::_convolution#20(%[4234])
  C3#70/Conv#6/SiLU#3:
        %[C3#70/15] : torch.float32(1, 192, 38, 38) = ./aten::silu_#0(%[C3#70/Conv#6/6])
  C3#70:
    %[./11] : torch.float32(1, 384, 38, 38) = ./aten::cat#9()
  C3#70/Conv#10/Conv2d#2:
        %[C3#70/Conv#10/6] : torch.float32(1, 384, 38, 38) = ./aten::_convolution#20(%[C3#70/11])
  C3#70/Conv#10/SiLU#3:
        %[4235] : torch.float32(1, 384, 38, 38) = ./aten::silu_#0(%[C3#70/Conv#10/6])
  Conv#71/Conv2d#2:
      %[Conv#71/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[4235])
  Conv#71/SiLU#3:
      %[4236] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[Conv#71/6])
  Concat#72:
    %[4237] : torch.float32(1, 768, 19, 19) = ./aten::cat#2()
  C3#73/Conv#4/Conv2d#2:
        %[C3#73/Conv#4/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[4237])
  C3#73/Conv#4/SiLU#3:
        %[C3#73/13] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#73/Conv#4/6])
  C3#73/Sequential#5/Bottleneck#2/Conv#2/Conv2d#2:
            %[C3#73/Sequential#5/Bottleneck#2/Conv#2/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#73/13])
  C3#73/Sequential#5/Bottleneck#2/Conv#2/SiLU#3:
            %[C3#73/Sequential#5/Bottleneck#2/6] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#73/Sequential#5/Bottleneck#2/Conv#2/6])
  C3#73/Sequential#5/Bottleneck#2/Conv#3/Conv2d#2:
            %[C3#73/Sequential#5/Bottleneck#2/Conv#3/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#73/Sequential#5/Bottleneck#2/6])
  C3#73/Sequential#5/Bottleneck#2/Conv#3/SiLU#3:
            %[C3#73/Sequential#5/6] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#73/Sequential#5/Bottleneck#2/Conv#3/6])
  C3#73/Sequential#5/Bottleneck#3/Conv#2/Conv2d#2:
            %[C3#73/Sequential#5/Bottleneck#3/Conv#2/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#73/Sequential#5/6])
  C3#73/Sequential#5/Bottleneck#3/Conv#2/SiLU#3:
            %[C3#73/Sequential#5/Bottleneck#3/6] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#73/Sequential#5/Bottleneck#3/Conv#2/6])
  C3#73/Sequential#5/Bottleneck#3/Conv#3/Conv2d#2:
            %[C3#73/Sequential#5/Bottleneck#3/Conv#3/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[C3#73/Sequential#5/Bottleneck#3/6])
  C3#73/Sequential#5/Bottleneck#3/Conv#3/SiLU#3:
            %[C3#73/14] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#73/Sequential#5/Bottleneck#3/Conv#3/6])
  C3#73/Conv#6/Conv2d#2:
        %[C3#73/Conv#6/6] : torch.float32(1, 384, 19, 19) = ./aten::_convolution#20(%[4237])
  C3#73/Conv#6/SiLU#3:
        %[C3#73/15] : torch.float32(1, 384, 19, 19) = ./aten::silu_#0(%[C3#73/Conv#6/6])
  C3#73:
    %[./11] : torch.float32(1, 768, 19, 19) = ./aten::cat#9()
  C3#73/Conv#10/Conv2d#2:
        %[C3#73/Conv#10/6] : torch.float32(1, 768, 19, 19) = ./aten::_convolution#20(%[C3#73/11])
  C3#73/Conv#10/SiLU#3:
        %[4238] : torch.float32(1, 768, 19, 19) = ./aten::silu_#0(%[C3#73/Conv#10/6])
  Detect#74/Conv2d#7:
      %[Detect#74/433] : torch.float32(1, 255, 76, 76) = ./aten::_convolution#20(%[4232])
  Detect#74:
    %[./11] : 1 = ./aten::size#9(%[./433])
    %[./13] : torch.int32() = ./aten::Int#11()
    %[./14] : torch.int32() = ./aten::Int#12()
    %[./19] : 76 = ./aten::size#14(%[./433])
    %[./21] : torch.int32() = ./aten::Int#16()
    %[./23] : 76 = ./aten::size#18(%[./433])
    %[./25] : torch.int32() = ./aten::Int#20()
    %[./29] : torch.float32(1, 3, 85, 76, 76) = ./aten::view#24(%[./433])
    %[./36] : torch.float32(1, 3, 76, 76, 85) = ./aten::permute#31(%[./29])
    %[./38] : torch.float32(1, 3, 76, 76, 85) = ./aten::contiguous#33(%[./36])
    %[./72] : torch.float32(1, 3, 76, 76, 85) = ./aten::sigmoid#35(%[./38])
    %[./77] : torch.float32(1, 3, 76, 76, 2) = ./aten::slice#40(%[./72])
    %[./79] : torch.float32(1, 3, 76, 76, 2) = ./aten::mul#42(%[./77])
    %[./82] : torch.float32(1, 3, 76, 76, 2) = ./aten::sub#45(%[./79])
    %[./84] : torch.float32(1, 3, 76, 76, 2) = ./aten::add#47(%[./82])
    %[./88] : torch.float32() = ./aten::select#51()
    %[./89] : torch.float32(1, 3, 76, 76, 2) = ./aten::mul#52(%[./84], %[./88])
    %[./94] : torch.float32(1, 3, 76, 76, 2) = ./aten::slice#57(%[./72])
    %[./100] : torch.float32(3, 76, 76, 2) = ./aten::view#63(%[./89])
    %[./108] : torch.float32(1, 3, 76, 76, 2) = ./aten::expand#71(%[./100])
    %[./110] : torch.float32(1, 3, 76, 76, 2) = ./aten::copy_#73(%[./94], %[./108])
    %[./115] : torch.float32(1, 3, 76, 76, 2) = ./aten::slice#78(%[./72])
    %[./117] : torch.float32(1, 3, 76, 76, 2) = ./aten::mul#80(%[./115])
    %[./119] : torch.float32(1, 3, 76, 76, 2) = ./aten::pow#82(%[./117])
    %[./122] : torch.float32(1, 3, 1, 1, 2) = ./aten::select#85()
    %[./123] : torch.float32(1, 3, 76, 76, 2) = ./aten::mul#86(%[./119], %[./122])
    %[./128] : torch.float32(1, 3, 76, 76, 2) = ./aten::slice#91(%[./72])
    %[./134] : torch.float32(3, 76, 76, 2) = ./aten::view#97(%[./123])
    %[./142] : torch.float32(1, 3, 76, 76, 2) = ./aten::expand#105(%[./134])
    %[./144] : torch.float32(1, 3, 76, 76, 2) = ./aten::copy_#107(%[./128], %[./142])
    %[./148] : torch.float32(1, 17328, 85) = ./aten::view#111(%[./72])
  Detect#74/Conv2d#112:
      %[Detect#74/434] : torch.float32(1, 255, 38, 38) = ./aten::_convolution#20(%[4235])
    %[./152] : 1 = ./aten::size#114(%[./434])
    %[./154] : torch.int32() = ./aten::Int#116()
    %[./155] : torch.int32() = ./aten::Int#117()
    %[./160] : 38 = ./aten::size#119(%[./434])
    %[./162] : torch.int32() = ./aten::Int#121()
    %[./164] : 38 = ./aten::size#123(%[./434])
    %[./166] : torch.int32() = ./aten::Int#125()
    %[./170] : torch.float32(1, 3, 85, 38, 38) = ./aten::view#129(%[./434])
    %[./177] : torch.float32(1, 3, 38, 38, 85) = ./aten::permute#136(%[./170])
    %[./179] : torch.float32(1, 3, 38, 38, 85) = ./aten::contiguous#138(%[./177])
    %[./213] : torch.float32(1, 3, 38, 38, 85) = ./aten::sigmoid#140(%[./179])
    %[./218] : torch.float32(1, 3, 38, 38, 2) = ./aten::slice#145(%[./213])
    %[./220] : torch.float32(1, 3, 38, 38, 2) = ./aten::mul#147(%[./218])
    %[./223] : torch.float32(1, 3, 38, 38, 2) = ./aten::sub#150(%[./220])
    %[./225] : torch.float32(1, 3, 38, 38, 2) = ./aten::add#152(%[./223])
    %[./228] : torch.float32() = ./aten::select#155()
    %[./229] : torch.float32(1, 3, 38, 38, 2) = ./aten::mul#156(%[./225], %[./228])
    %[./234] : torch.float32(1, 3, 38, 38, 2) = ./aten::slice#161(%[./213])
    %[./240] : torch.float32(3, 38, 38, 2) = ./aten::view#167(%[./229])
    %[./248] : torch.float32(1, 3, 38, 38, 2) = ./aten::expand#175(%[./240])
    %[./250] : torch.float32(1, 3, 38, 38, 2) = ./aten::copy_#177(%[./234], %[./248])
    %[./255] : torch.float32(1, 3, 38, 38, 2) = ./aten::slice#182(%[./213])
    %[./257] : torch.float32(1, 3, 38, 38, 2) = ./aten::mul#184(%[./255])
    %[./259] : torch.float32(1, 3, 38, 38, 2) = ./aten::pow#186(%[./257])
    %[./262] : torch.float32(1, 3, 1, 1, 2) = ./aten::select#189()
    %[./263] : torch.float32(1, 3, 38, 38, 2) = ./aten::mul#190(%[./259], %[./262])
    %[./268] : torch.float32(1, 3, 38, 38, 2) = ./aten::slice#195(%[./213])
    %[./274] : torch.float32(3, 38, 38, 2) = ./aten::view#201(%[./263])
    %[./282] : torch.float32(1, 3, 38, 38, 2) = ./aten::expand#209(%[./274])
    %[./284] : torch.float32(1, 3, 38, 38, 2) = ./aten::copy_#211(%[./268], %[./282])
    %[./288] : torch.float32(1, 4332, 85) = ./aten::view#215(%[./213])
  Detect#74/Conv2d#216:
      %[Detect#74/435] : torch.float32(1, 255, 19, 19) = ./aten::_convolution#20(%[4238])
    %[./292] : 1 = ./aten::size#218(%[./435])
    %[./294] : torch.int32() = ./aten::Int#220()
    %[./295] : torch.int32() = ./aten::Int#221()
    %[./300] : 19 = ./aten::size#223(%[./435])
    %[./302] : torch.int32() = ./aten::Int#225()
    %[./304] : 19 = ./aten::size#227(%[./435])
    %[./306] : torch.int32() = ./aten::Int#229()
    %[./310] : torch.float32(1, 3, 85, 19, 19) = ./aten::view#233(%[./435])
    %[./317] : torch.float32(1, 3, 19, 19, 85) = ./aten::permute#240(%[./310])
    %[./319] : torch.float32(1, 3, 19, 19, 85) = ./aten::contiguous#242(%[./317])
    %[./353] : torch.float32(1, 3, 19, 19, 85) = ./aten::sigmoid#244(%[./319])
    %[./358] : torch.float32(1, 3, 19, 19, 2) = ./aten::slice#249(%[./353])
    %[./360] : torch.float32(1, 3, 19, 19, 2) = ./aten::mul#251(%[./358])
    %[./363] : torch.float32(1, 3, 19, 19, 2) = ./aten::sub#254(%[./360])
    %[./365] : torch.float32(1, 3, 19, 19, 2) = ./aten::add#256(%[./363])
    %[./368] : torch.float32() = ./aten::select#259()
    %[./369] : torch.float32(1, 3, 19, 19, 2) = ./aten::mul#260(%[./365], %[./368])
    %[./374] : torch.float32(1, 3, 19, 19, 2) = ./aten::slice#265(%[./353])
    %[./380] : torch.float32(3, 19, 19, 2) = ./aten::view#271(%[./369])
    %[./388] : torch.float32(1, 3, 19, 19, 2) = ./aten::expand#279(%[./380])
    %[./390] : torch.float32(1, 3, 19, 19, 2) = ./aten::copy_#281(%[./374], %[./388])
    %[./395] : torch.float32(1, 3, 19, 19, 2) = ./aten::slice#286(%[./353])
    %[./397] : torch.float32(1, 3, 19, 19, 2) = ./aten::mul#288(%[./395])
    %[./399] : torch.float32(1, 3, 19, 19, 2) = ./aten::pow#290(%[./397])
    %[./402] : torch.float32(1, 3, 1, 1, 2) = ./aten::select#293()
    %[./403] : torch.float32(1, 3, 19, 19, 2) = ./aten::mul#294(%[./399], %[./402])
    %[./408] : torch.float32(1, 3, 19, 19, 2) = ./aten::slice#299(%[./353])
    %[./414] : torch.float32(3, 19, 19, 2) = ./aten::view#305(%[./403])
    %[./422] : torch.float32(1, 3, 19, 19, 2) = ./aten::expand#313(%[./414])
    %[./424] : torch.float32(1, 3, 19, 19, 2) = ./aten::copy_#315(%[./408], %[./422])
    %[./428] : torch.float32(1, 1083, 85) = ./aten::view#319(%[./353])
    %[./431] : torch.float32(1, 22743, 85) = ./aten::cat#322()
    %[4239] : (torch.float32(1, 3, 76, 76, 85), torch.float32(1, 3, 38, 38, 85), torch.float32(1, 3, 19, 19, 85), torch.float32(1, 22743, 85)) = ./prim::TupleConstruct#323(%[./38], %[./179], %[./319], %[./431])
  %[4211] : torch.float32(1, 3, 76, 76, 85), %[4212] : torch.float32(1, 3, 38, 38, 85), %[4213] : torch.float32(1, 3, 19, 19, 85), %[4214] : torch.float32(1, 22743, 85) = prim::TupleUnpack#75(%[4239])
  %[3088] : [torch.float32(1, 3, 76, 76, 85), torch.float32(1, 3, 38, 38, 85), torch.float32(1, 3, 19, 19, 85)] = prim::ListConstruct#76(%[4211], %[4212], %[4213])
  %[3089] : (torch.float32(1, 22743, 85), [torch.float32(1, 3, 76, 76, 85), torch.float32(1, 3, 38, 38, 85), torch.float32(1, 3, 19, 19, 85)]) = prim::TupleConstruct#77(%[4214], %[3088])
  return(%[3089] : (torch.float32(1, 22743, 85), [torch.float32(1, 3, 76, 76, 85), torch.float32(1, 3, 38, 38, 85), torch.float32(1, 3, 19, 19, 85)]))
; falling back to native python function call
ERROR:Neuron:3107
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch_neuron/convert.py", line 448, in _convert_item
    item, inputs, compiler_workdir=sg_workdir, **kwargs)
  File "/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch_neuron/decorators.py", line 194, in trace
    return create_runnable(metaneff, neff_ts, jit_trace, example_inputs, preprocessor, postprocessor, output_tensors)
  File "/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch_neuron/decorators.py", line 313, in create_runnable
    neuron_trace = torch.jit.trace(neuron_function, example_inputs)
  File "/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch/jit/_trace.py", line 779, in trace
    name, func, example_inputs, var_lookup_fn, strict, _force_outplace
  File "/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch_neuron/decorators.py", line 312, in neuron_function
    return postprocessor(output_tensors)
  File "/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch_neuron/decorators.py", line 1129, in __call__
    for value in node.inputs()]
  File "/home/ubuntu/anaconda3/envs/aws_neuron_pytorch_p36/lib/python3.6/site-packages/torch_neuron/decorators.py", line 1129, in <listcomp>
    for value in node.inputs()]
KeyError: 3107
INFO:Neuron:Number of arithmetic operators (post-compilation) before = 304, compiled = 0, percent compiled = 0.0%
INFO:Neuron:The neuron partitioner created 1 sub-graphs
INFO:Neuron:Neuron successfully compiled 0 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 0.0%
INFO:Neuron:Compiled these operators (and operator counts) to Neuron:
INFO:Neuron:Not compiled operators (and operator counts) to Neuron:
INFO:Neuron: => aten::Int: 12 [supported]
INFO:Neuron: => aten::_convolution: 86 [supported]
INFO:Neuron: => aten::add: 17 [supported]
INFO:Neuron: => aten::cat: 15 [supported]
INFO:Neuron: => aten::contiguous: 3 [supported]
INFO:Neuron: => aten::copy_: 6 [supported]
INFO:Neuron: => aten::expand: 6 [supported]
INFO:Neuron: => aten::max_pool2d: 3 [supported]
INFO:Neuron: => aten::mul: 12 [supported]
INFO:Neuron: => aten::permute: 3 [supported]
INFO:Neuron: => aten::pow: 3 [supported]
INFO:Neuron: => aten::select: 6 [supported]
INFO:Neuron: => aten::sigmoid: 3 [supported]
INFO:Neuron: => aten::silu: 83 [supported]
INFO:Neuron: => aten::size: 9 [supported]
INFO:Neuron: => aten::slice: 20 [supported]
INFO:Neuron: => aten::sub: 3 [supported]
INFO:Neuron: => aten::upsample_nearest2d: 2 [supported]
INFO:Neuron: => aten::view: 12 [supported]
