Shady Agwa

Research Fellow

DiP: A Scalable, Energy-Efficient Systolic Array for Matrix Multiplication Acceleration


Preprint


Ahmed J. Abdelmaksoud, Shady O. Agwa, T. Prodromakis
arXiv.org, 2024

Cite

APA
Abdelmaksoud, A. J., Agwa, S. O., & Prodromakis, T. (2024). DiP: A Scalable, Energy-Efficient Systolic Array for Matrix Multiplication Acceleration. arXiv.org.


Chicago/Turabian
Abdelmaksoud, Ahmed J., Shady O. Agwa, and T. Prodromakis. “DiP: A Scalable, Energy-Efficient Systolic Array for Matrix Multiplication Acceleration.” arXiv.org (2024).


MLA
Abdelmaksoud, Ahmed J., et al. “DiP: A Scalable, Energy-Efficient Systolic Array for Matrix Multiplication Acceleration.” arXiv.org, 2024.


BibTeX

@article{abdelmaksoud2024dip,
  title = {DiP: A Scalable, Energy-Efficient Systolic Array for Matrix Multiplication Acceleration},
  year = {2024},
  journal = {arXiv.org},
  author = {Abdelmaksoud, Ahmed J. and Agwa, Shady O. and Prodromakis, T.}
}

Abstract

Transformers are attracting increasing attention across application domains because of their outstanding accuracy. However, these data-intensive models place significant performance demands on existing computing architectures. Systolic arrays are spatial architectures that have been adopted by commercial AI computing platforms (such as Google's TPUs) for their energy-efficient data-reuse approach. However, these spatial architectures incur a throughput and energy-efficiency penalty from the First-In-First-Out (FIFO) buffers needed to synchronize inputs and outputs. This paper proposes a novel, scalable systolic-array architecture featuring a Diagonal-Input and Permutated weight-stationary (DiP) dataflow for accelerating matrix multiplication. The proposed architecture eliminates the synchronization FIFOs required by state-of-the-art weight-stationary systolic arrays. Beyond the area, power, and energy savings from eliminating these FIFOs, the DiP architecture maximizes the utilization of its processing elements (PEs), outperforming its weight-stationary counterparts in throughput by up to 50%. A comprehensive hardware design-space exploration in a commercial 22nm technology highlights DiP's scalability advantages over the conventional approach across various dimensions, with improvements in energy efficiency per unit area of up to 2.02x. Furthermore, DiP is evaluated on transformer workloads from widely used models, consistently outperforming TPU-like architectures with energy improvements of up to 1.81x and latency improvements of up to 1.49x. At a 64x64 size (4096 PEs), DiP achieves a peak performance of 8.2 TOPS with an energy efficiency of 9.55 TOPS/W.
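To make the FIFO overhead the abstract refers to concrete, the sketch below is a cycle-level simulation of a *conventional* weight-stationary systolic array (the TPU-like baseline, not DiP itself): each PE holds one stationary weight, activations flow left-to-right, and partial sums flow top-to-bottom. The `feed` function and the output-collection arithmetic model the input-skew and output-deskew synchronization that DiP's diagonal-input, permutated-weight dataflow is designed to eliminate. This is an illustrative model written for this summary, not the paper's RTL.

```python
import numpy as np

def ws_systolic_matmul(A, B):
    """Cycle-level model of a conventional weight-stationary systolic
    array computing C = A @ B (A: MxK, B: KxN). PE (k, n) holds the
    stationary weight B[k, n]. Illustrative sketch only."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    a_reg = np.zeros((K, N))   # activation register inside each PE
    p_reg = np.zeros((K, N))   # partial-sum register inside each PE
    C = np.zeros((M, N))

    def feed(k, t):
        # Input-skew FIFO: the activation stream into array row k is
        # delayed by k cycles so partial sums line up down each column.
        m = t - k
        return A[m, k] if 0 <= m < M else 0.0

    for t in range(M + K + N):
        # Update PEs in reverse index order so each PE reads its
        # neighbours' previous-cycle values (one pipeline stage).
        for k in range(K - 1, -1, -1):
            for n in range(N - 1, -1, -1):
                a_in = a_reg[k, n - 1] if n > 0 else feed(k, t)
                p_in = p_reg[k - 1, n] if k > 0 else 0.0
                p_reg[k, n] = p_in + a_in * B[k, n]   # MAC
                a_reg[k, n] = a_in                    # pass right
        # Output-deskew: column n's result for row m leaves the bottom
        # of the array at cycle m + (K - 1) + n.
        for n in range(N):
            m = t - (K - 1) - n
            if 0 <= m < M:
                C[m, n] = p_reg[K - 1, n]
    return C
```

Note that both the per-row input delays and the staggered output-collection schedule exist purely for synchronization; they cost extra cycles (and, in hardware, FIFO area and energy) without doing any arithmetic, which is the overhead the abstract's 50% throughput and 2.02x energy-efficiency-per-area figures target.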