Conventional computers store information in volatile memory (random-access memory) and non-volatile memory (hard drives and solid-state drives). A central processing unit (CPU) then processes the data sequentially. This mode of operation requires a substantial amount of information transfer between the CPU and the memories, which necessarily limits the performance and scalability of the architecture. A significant improvement in computing performance therefore requires a fundamental change in approach: moving from the well-established von Neumann architecture (or architectures based on it) to novel, efficient, massively parallel computing schemes, which would most likely take advantage of non-traditional electronic devices.
I will discuss the implementation of a novel approach to computing, named memcomputing, inspired by the operation of our own brain. Memcomputing (computing with memory circuit elements, or memelements) satisfies three important requirements: (i) it is intrinsically massively parallel; (ii) its information-storage and computing units are physically the same; and (iii) it does not rely on active elements as the main tools of operation. I will discuss the possibilities offered by memcomputing, the criteria that must be satisfied to realize this paradigm, and several examples showing the massively parallel solution of optimization problems.
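As a rough illustration (not part of the talk itself), the idea that a memelement's storage and computing roles coincide can be sketched with a toy memristive element. The linear-drift model and all parameter values below are illustrative assumptions, not the speaker's actual devices: the element's resistance encodes its memory, and that same resistance shapes how it responds to the next input signal.

```python
# Toy memristive element with a hypothetical bounded linear-drift model.
# The stored state (integrated charge q) and the computing behavior
# (instantaneous resistance m) live in the same physical quantity,
# illustrating requirement (ii) of memcomputing.
def simulate(voltages, r_on=100.0, r_off=16000.0, dt=1e-3, q_max=1e-2):
    q = 0.0          # integrated charge: the element's "memory"
    history = []
    for v in voltages:
        x = min(max(q / q_max, 0.0), 1.0)   # bounded internal state variable
        m = r_off - (r_off - r_on) * x      # memristance set by stored state
        i = v / m                           # Ohm's law at this instant
        q += i * dt                         # the signal itself rewrites the memory
        history.append(m)
    return history

# A constant positive bias gradually drives the resistance down:
rs = simulate([1.0] * 500)
assert rs[-1] < rs[0]
```

In a conventional architecture the state update (`q += i * dt`) would require shuttling data between a CPU and a separate memory; here it happens in place, within the element itself.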
M. Di Ventra and Y. V. Pershin, Nature Physics 9, 200 (2013).