
Deep Learning Hardware Accelerator (DLHA)

The DLHA uses an FPGA to accelerate the convolution operations of the MobileNetV2 neural network. The network is first compiled and then quantized using Glow, which shrinks it enough to run on a small device like the Zedboard.
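As a rough sketch of the compile-and-quantize step described above, Glow's ahead-of-time toolchain can profile a network and then emit a quantized standalone bundle. The exact model file, image set, and profile name below are placeholders, not artifacts from this project:

```shell
# 1. Run the network in profiling mode to record tensor ranges
#    (hypothetical input files; substitute the real model and images).
image-classifier calibration_images/*.png \
  -image-mode=0to1 \
  -model=mobilenetv2.onnx \
  -dump-profile=profile.yaml

# 2. Recompile with the recorded profile to produce a quantized
#    CPU bundle that can be cross-compiled for the Zedboard's ARM core.
model-compiler \
  -model=mobilenetv2.onnx \
  -load-profile=profile.yaml \
  -backend=CPU \
  -emit-bundle=build/
```

The emitted bundle is plain object code plus weights, so the quantized convolutions can then be offloaded to the FPGA fabric while the ARM host drives the rest of the network.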

Team Members: 

Cristian Ascencio

Nicholas Dao

Jonico Eustaquio

Jost Luebbe

Alex Stahl

Vinh Tran

Isabelle Villamiel