* Copyright (C) 2010-2018 Arm Limited or its affiliates. All rights reserved. * SPDX-License-Identifier: Apache-2.0 * Licensed under the Apache License, Version 2.0 ...
Abstract: Recently, transformers have garnered significant attention due to their exceptional capability to capture long-range dependencies in data. A critical factor contributing to their superior ...
In this paper, we tackle the high computational overhead of transformers for lightweight image super-resolution (SR). Motivated by the observations of self-attention's inter-layer repetition, we ...
Visual Attention Networks (VANs) leveraging Large Kernel Attention (LKA) have demonstrated remarkable performance in diverse computer vision tasks, often outperforming Vision Transformers (ViTs) in ...
1 College of Information Engineering, Xinchuang Software Industry Base, Yancheng Teachers University, Yancheng, China. 2 Yancheng Agricultural College, Yancheng, China. Convolutional auto-encoders ...