Document Type
Thesis
Date of Award
12-31-2023
Degree Name
Master of Science in Electrical Engineering (M.S.)
Department
Electrical and Computer Engineering
First Advisor
Shaahin Angizi
Second Advisor
Durgamadhab Misra
Third Advisor
Cong Wang
Abstract
Optimizing computational power is critical in the age of data-intensive applications and Artificial Intelligence (AI)/Machine Learning (ML). The conventional Von Neumann architecture faces challenging bottlenecks, making it seemingly impossible to implement such demanding workloads. Hardware accelerators are critical for deploying these technologies efficiently and have been explored extensively for edge devices. This study examines Gemmini, a state-of-the-art open-source hardware accelerator generator; leveraging this tool, we develop a hardware accelerator and compare it with a non-Von Neumann architecture. Gemmini is renowned for efficient matrix multiplication, but configuring it for specific tasks requires manual effort and domain expertise. We propose reducing this manual intervention and the need for domain expertise by leveraging Large Language Models (LLMs), which enable data-informed decision-making and make it easier to develop and deploy hardware accelerators, a process that is otherwise time-consuming and expertise-intensive. This work introduces an innovative method for hardware accelerator generation that uses Gemmini to produce optimized hardware accelerators for AI/ML applications, paving the way for automation and customization in the field.
Recommended Citation
Vungarala, Durga Lakshmi Venkata Deepak, "Gen-acceleration: Pioneering work for hardware accelerator generation using large language models" (2023). Theses. 2376.
https://digitalcommons.njit.edu/theses/2376
Included in
Computer and Systems Architecture Commons; Digital Circuits Commons; VLSI and Circuits, Embedded and Hardware Systems Commons