Charlotte Frenkel

Assistant Professor

Delft University of Technology

Mekelweg 4, 2628 CD Delft
The Netherlands
c (dot) frenkel (at) tudelft (dot) nl

Whether inspired by biological intelligence or by artificial intelligence (AI) based on machine-learning techniques, the development of low-cost smart devices at the edge is a key stepping stone toward a distributed, off-cloud, always-on, and ambient form of adaptive processing. On the one hand, AI processor design aims at leveraging the successes of artificial neural networks (ANNs) to achieve best-in-class accuracy on specific tasks (top-down). However, current AI processors do not yet have the energy efficiency and versatility of biological neural networks. On the other hand, the field of neuromorphic engineering aims at replicating biological intelligence in silicon (bottom-up). Compared to conventional von Neumann processors, this implies a two-fold paradigm shift: (i) in the data representation, from a clocked multi-bit encoding to sparse space- and time-encoded binary spike events, and (ii) in the architecture, from separate processing and memory to co-located neurons and synapses. Efficiently tackling this two-fold paradigm shift is still an open challenge, as highlighted by the diversity of approaches adopted worldwide for neuromorphic integrated circuit (IC) design. Indeed, all circuit design styles are being explored: asynchronous and synchronous digital, sub- and above-threshold analog, and mixed-signal. Yet, no clear trend emerges, and recent calls from industry and academia stress the need to consolidate the field in a clear direction (e.g., see [Davies, Nat. Mach. Intell., 2019]).

I am a digital IC designer whose curiosity got trapped in the realms of neuromorphic engineering. I work on aspects ranging from circuit design techniques (both digital and mixed-signal with emerging devices) to computer architecture, learning algorithms, and neuroscience. My research goals are:

To achieve these goals, I am investigating both the bottom-up and the top-down design approaches, as well as their synergies.

News

Projects

Below is a summary of my main projects. It covers ICs, algorithms, and the neuromorphic intelligence framework. For the associated publications and open-source repositories, see Publications.

Chip gallery

The ODIN neuromorphic processor (2016-2020)

The ODIN 256-neuron 64k-synapse neuromorphic processor (28-nm CMOS) highlights how design constraints on the synapses can be relaxed by offloading most synaptic computations to the neuron level. All synapses embed spike-driven synaptic plasticity (SDSP), while neurons can phenomenologically reproduce the 20 Izhikevich behaviors of cortical spiking neurons. At the time of publication, ODIN demonstrated the highest neuron and synapse densities, as well as the lowest energy per synaptic operation, among digital designs.
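To give an intuition for the SDSP rule mentioned above, here is a minimal Python sketch in the Brader-Fusi style: on a presynaptic spike, the weight is potentiated or depressed depending on the postsynaptic membrane potential and a calcium trace of recent postsynaptic activity. All threshold and step values below are illustrative and do not correspond to ODIN's actual fixed-point parameters.

```python
def sdsp_update(w, v_post, ca_post,
                theta_v=0.8, theta_up=(1.0, 3.0), theta_down=(0.5, 2.0),
                a=0.1, b=0.1, w_min=0.0, w_max=1.0):
    """SDSP evaluated at the time of a presynaptic spike (illustrative).
    Potentiate if the postsynaptic membrane potential is high and the
    calcium trace lies in the potentiation window; depress if the
    potential is low and the calcium trace lies in the depression
    window; otherwise leave the weight unchanged."""
    if v_post > theta_v and theta_up[0] < ca_post < theta_up[1]:
        w = min(w + a, w_max)       # potentiation, saturating at w_max
    elif v_post <= theta_v and theta_down[0] < ca_post < theta_down[1]:
        w = max(w - b, w_min)       # depression, saturating at w_min
    return w
```

The calcium-trace windows act as a stop-learning mechanism: when postsynaptic activity is too low or too high, the synapse is left untouched, which stabilizes learning.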

Synapse block (ISCAS'17)   Neuron block (BioCAS'17)

Chip (Trans. BioCAS'19)   EMG classif. (Front. Neur.'20)

Open-source HW

The MorphIC neuromorphic processor (2017-2020)

The 2k-neuron 2M-synapse quad-core MorphIC SNN processor (65-nm CMOS) extends the ODIN core toward large-scale integration with (i) a stochastic synaptic plasticity rule for online learning with high-density binary synapses, and (ii) a hierarchical spike-routing network-on-chip (NoC) combining local crossbar, inter-core tree-based, and inter-chip mesh-based routing, which achieves biologically plausible neuron fan-in and fan-out values of 1k and 2k, respectively. In technology-normalized terms, MorphIC further improves on the density claim of ODIN.
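The idea behind stochastic plasticity with binary synapses can be sketched as follows (a toy illustration, not MorphIC's actual rule): since a 1-bit weight cannot accumulate small analog changes, each candidate update flips the bit only with a low probability, so that the expected weight change remains small while the storage cost stays at one bit per synapse.

```python
import random

def stochastic_binary_update(w_bit, potentiate, p=0.05, rng=random):
    """Toy stochastic plasticity for a 1-bit synapse: the weight is
    flipped only with probability p, so the *expected* update is small
    even though each individual update is all-or-nothing."""
    if rng.random() < p:
        return 1 if potentiate else 0
    return w_bit
```

Averaged over many spike events, this behaves like a small learning rate applied to a dense array of binary weights.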

Chip (ISCAS'19)   Chip (Trans. BioCAS'19)

DVS classif. (Front. Neur.'20)

The SPOON event-driven CNN (2019-ongoing)

SPOON exploits input temporal coding to leverage the sparsity of spike-based image sensors for low-power, always-on edge computing. Event-driven and frame-based computation are combined to maximize data reuse. SPOON demonstrates, for the first time, that a neuromorphic processor can reach a competitive accuracy-efficiency tradeoff compared to conventional task-specific ANN accelerators. It also embeds an optimized on-chip implementation of our DRTP algorithm, providing on-the-fly adaptation to new features in incoming data at only ~15% power and area overheads. These key results led to a best paper award at ISCAS 2020. The chip, fabricated in 28-nm CMOS, has since been functionally validated; a journal extension is in preparation.
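As an illustration of input temporal coding, here is one common scheme, time-to-first-spike (TTFS) encoding, sketched in Python (an illustrative code, not necessarily SPOON's exact scheme): brighter pixels spike earlier, and zero-intensity pixels never spike, which is where the input sparsity exploited by event-driven processing comes from.

```python
import numpy as np

def ttfs_encode(pixels, t_max=100):
    """Time-to-first-spike encoding of normalized pixel intensities in
    [0, 1]: intensity 1.0 spikes at t=0, lower intensities spike later,
    and zero intensities never spike (-1 marks 'no spike')."""
    pixels = np.asarray(pixels, dtype=float)
    t = np.where(pixels > 0, np.rint((1.0 - pixels) * t_max), -1)
    return t.astype(int)
```

With such a code, at most one spike per pixel is ever processed, and dark (silent) pixels cost nothing downstream.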

Pre-silicon results (ISCAS'20)

The ReckOn spiking recurrent neural network (2020-ongoing)

ReckOn demonstrates, for the first time, end-to-end on-chip learning over second-long timescales (no external memory accesses, no pre-training). It is based on a bio-inspired alternative to backpropagation through time (BPTT), the e-prop training algorithm, modified to reduce the memory overhead of training to only 0.8% of that of the equivalent inference-only design. This enables a low-cost solution with a 0.45-mm² core area and a <50-µW power budget at 0.5 V for real-time learning in 28-nm FDSOI CMOS, suitable for always-on deployment at the extreme edge. Furthermore, similarly to the brain, ReckOn exploits the sensor-agnostic property of spike-based information. Combined with code-agnostic e-prop-based training, this leads to a task-agnostic learning chip, demonstrated on vision, audition, and navigation tasks.
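The key property of e-prop that makes such on-chip learning tractable can be sketched in a few lines of Python (a simplified leaky-integrator version with made-up constants, not ReckOn's fixed-point hardware rule): each synapse keeps a forward-running eligibility trace, and the weight update is the product of that trace with an online per-neuron learning signal, so no past activations ever need to be stored, unlike BPTT.

```python
import numpy as np

ALPHA = 0.9   # membrane leak factor (illustrative)
LR = 1e-3     # learning rate (illustrative)

def eprop_step(v, eps, w, z_pre, learn_signal, v_th=1.0):
    """One time step of an e-prop-style local update for leaky
    integrator neurons. v: membrane potentials (n_post,), eps: filtered
    presynaptic traces (n_pre,), w: weights (n_post, n_pre),
    learn_signal: per-neuron online learning signal (n_post,). Only v
    and eps carry temporal state across steps."""
    v = ALPHA * v + w @ z_pre                        # leaky integration
    psi = np.maximum(0.0, 1.0 - np.abs(v - v_th))    # surrogate derivative
    eps = ALPHA * eps + z_pre                        # eligibility (input) trace
    w = w - LR * np.outer(learn_signal * psi, eps)   # local, online update
    return v, eps, w
```

Because the traces run forward in time alongside inference, the training memory overhead reduces to one trace per presynaptic input rather than a full history of network states.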

Chip (ISSCC'22)   Open-source HW

Algorithms

Direct Random Target Projection (2018-2021)

The computational and memory costs of neural network training should be minimized for adaptive edge computing. Two key constraints preclude the standard backpropagation-of-error (BP) algorithm from being both hardware-efficient and biophysically plausible: the weight transport problem and update locking. Based on the concept of feedback alignment, we proposed the direct random target projection (DRTP) algorithm, which is purely feedforward and relies only on local gradient and weight information. It resolves both issues of BP without compromising training performance on tasks whose complexity is suitable for processing at the very edge.
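The feedforward nature of DRTP can be sketched as follows (a simplified one-hidden-layer Python sketch with illustrative sizes, not the reference implementation): each hidden layer is taught with a fixed random projection of the one-hot target, so it needs neither the transposed downstream weights (no weight transport) nor the result of a backward pass (no update locking).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes (hypothetical); one hidden layer for brevity.
n_in, n_hid, n_out = 16, 8, 3
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B1 = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random projection matrix

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def drtp_step(x, y_onehot, lr=0.1):
    """One DRTP update: the hidden layer's modulatory signal is a fixed
    random projection of the label, available as soon as the forward
    pass reaches that layer; the output layer uses its local delta."""
    global W1, W2
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    # Hidden layer: random projection of the target, purely feedforward.
    delta1 = (B1 @ y_onehot) * h * (1.0 - h)
    W1 -= lr * np.outer(delta1, x)
    # Output layer: standard local delta rule.
    delta2 = (y - y_onehot) * y * (1.0 - y)
    W2 -= lr * np.outer(delta2, h)
    return y
```

Since the label is known before the forward pass even starts, each layer can update its weights as soon as its own activations are computed, which is what makes the rule attractive for low-memory on-chip learning.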

Algorithm (Front. Neur.'21)   Open-source code

Neuromorphic intelligence framework

First steps toward a framework for neuromorphic intelligence were taken in my PhD thesis, where bottom-up and top-down investigations allowed me to identify guidelines for the design of efficient neuromorphic hardware, highlighting that each approach can naturally act as a guide to address the shortcomings of the other. These results were then taken a step further in an extensive review paper, whose preprint has just been released.

PhD thesis   Neuromorphic Intelligence (arXiv'21)

Selected publications and talks

For a full publication list together with citation data, please refer to my Google Scholar profile. My main invited talks are also listed at the end of this section.

Journal papers

Conference papers

Preprints

Invited talks

Main Awards

Academic Career and Education

Academic Career

Education

Academic Service

Most of my reviewing activity is also summarized on my Publons profile.