Bayerisches Chip-Design-Center (BCDC)
Participants:
1. Dr. John Reuben
2. Prof. Dietmar Fey
The movement of data between processing and memory units is the main performance and energy-efficiency bottleneck of present-day computing systems, often referred to as the ‘von Neumann bottleneck’ or ‘memory wall’. As technology scales down, it is evident that data-movement energy dominates computation energy, i.e., the computation itself consumes only a small fraction of the total energy. There has been an ongoing effort to combat the memory wall by bringing the processor and memory closer to each other. The emergence of Non-Volatile Memory (NVM) technologies such as Resistive RAM (RRAM), Phase Change Memory (PCM) and Spin-Transfer Torque Magnetic RAM (STT-MRAM) has created opportunities to overcome the memory wall by enabling computation not just near the data, but at the residence of the data.

The focus of this project is to investigate ways to implement Convolutional Neural Networks (CNNs) in Resistive RAM. Vector-Matrix Multiplication (VMM) is the fundamental and most frequently required computation in Neural Network (NN) inference. Although many research groups have attempted to perform VMM in memory, two problems remain with existing architectures. First, Analog-to-Digital Converters (ADCs) consume a large share of the energy and occupy much of the area of the memory array: it is estimated that ADCs account for ≈ 60-85% of the total energy and 70-90% of the total area of a VMM core based on NVM arrays. Second, programming a memory cell to multiple states is a challenge in most non-volatile memory technologies (Flash, RRAM, FeFET). In this project, we explore ways to overcome these two challenges and propose alternative architectures to perform VMM in a fast and energy-efficient manner within the memory array.
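As background, the way a resistive crossbar performs VMM can be sketched as follows (a generic textbook illustration, not the architecture proposed in this project): weights are stored as cell conductances, input voltages drive the rows, and Ohm's and Kirchhoff's laws deliver the dot products as column currents, which the ADCs must then digitize.

```python
import numpy as np

# Illustrative model of an analog crossbar VMM: each weight is stored as a
# cell conductance G[i][j]; applying voltages V[i] on the word lines yields
# bit-line currents I[j] = sum_i V[i] * G[i][j] (Ohm's law + Kirchhoff's
# current law), i.e. an analog vector-matrix product. Values are arbitrary.
rng = np.random.default_rng(0)

V = rng.uniform(0.0, 0.2, size=4)         # input voltages on word lines (V)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # cell conductances (siemens)

I = V @ G  # column (bit-line) currents = VMM result

# In a real design, an ADC now digitizes each column current I[j];
# that conversion step is what dominates energy and area.
print(I)
```

The energy/area figures quoted above refer precisely to this per-column digitization step, which motivates ADC-less alternatives.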
Published works/Preprints/work-in-progress:
[1] John Reuben, Felix Zeller, Benjamin Seiler and Dietmar Fey, "A Multiplier-less Architecture for Image Convolution in Memory", Journal of Low Power Electronics and Applications (JLPEA), MDPI, October 2025
[2] Felix Zeller, John Reuben and Dietmar Fey, "Multiplier-free In-Memory Vector-Matrix Multiplication Using Distributed Arithmetic", TechRxiv preprint, October 2025
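The multiplier-free approach in [2] builds on distributed arithmetic. A minimal textbook sketch of the technique (not the papers' actual in-memory circuit): for a fixed weight vector, all possible partial sums of weights are precomputed in a lookup table, and the dot product is then assembled from the input bit-planes using only shifts and additions.

```python
# Distributed arithmetic: compute y = sum_k w[k] * x[k] for a FIXED weight
# vector w without any multiplier. Precompute the sum of weights for every
# bit pattern of the inputs, then process the inputs one bit-plane at a time
# (MSB first) with shift-and-add.
w = [3, -1, 4, 2]  # fixed weight vector (e.g. a convolution kernel)
BITS = 4           # unsigned input bit-width

# Lookup table: for each pattern over len(w) inputs, the sum of selected weights.
lut = [sum(w[k] for k in range(len(w)) if (pattern >> k) & 1)
       for pattern in range(1 << len(w))]

def da_dot(x):
    """Dot product of unsigned inputs x with w, via distributed arithmetic."""
    acc = 0
    for b in range(BITS - 1, -1, -1):        # iterate bit-planes, MSB first
        pattern = 0
        for k, xk in enumerate(x):
            pattern |= ((xk >> b) & 1) << k  # collect bit b of every input
        acc = (acc << 1) + lut[pattern]      # shift-add, no multiplication
    return acc

x = [5, 9, 2, 7]
assert da_dot(x) == sum(wk * xk for wk, xk in zip(w, x))  # matches naive VMM
```

The lookup table grows as 2^N in the number of inputs N, so practical designs partition long vectors into small groups; how the table is realized inside the memory array is the subject of the papers above.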
Master's thesis topics available:
Students interested in in-memory computing, AI hardware, and neural network accelerators who are willing to do circuit design can contact Dr. John Reuben for an interesting Master's thesis topic.