A review of in-memory computing for machine learning: architectures, options

Vaclav Snasel (Faculty of Electrical Engineering and Computer Science, VSB–Technical University of Ostrava, Ostrava, Czech Republic)
Tran Khanh Dang (Faculty of Information Technology, Ho Chi Minh City University of Food Industry, Ho Chi Minh City, Vietnam)
Josef Kueng (Department of Computer Science, Johannes Kepler University Linz, Austria, Linz, Austria)
Lingping Kong (Faculty of Electrical Engineering and Computer Science, VSB–Technical University of Ostrava, Ostrava, Czech Republic)

International Journal of Web Information Systems

ISSN: 1744-0084

Article publication date: 22 December 2023

Issue publication date: 5 February 2024

Abstract

Purpose

This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the history, architectures and options aspects. In this review, the authors investigate different architectural aspects and provide comparative evaluations.

Design/methodology/approach

The authors collected over 40 recent IMC papers related to hardware design and optimization techniques and classified them into three optimization categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data sets it was applied to, how it is designed and what the contribution of the design is.

Findings

ML algorithms are potent tools accommodated on IMC architectures. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited by the overhead of supporting broad flexibility. On the other hand, hardware accelerators (field-programmable gate arrays and application-specific integrated circuits) offer better energy efficiency, but an individual accelerator is often adapted exclusively to a single ML approach (family). From a long-term hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is a promising option for researchers.

Originality/value

IMC optimization enables high-speed processing, increases performance and supports real-time analysis of massive data volumes. This work reviews IMC and its evolution. The authors then categorize three optimization paths for the IMC architecture to improve performance metrics.

Acknowledgements

The authors gratefully acknowledge financial support from the Czech Republic Ministry of Education, Youth and Sports under grant DST/INT/Czech/P-12/2019, reg. no. LTAIN19176, in the project “Metaheuristics Framework for Multiobjective Combinatorial Optimization Problems (META MO-COP)”.

Citation

Snasel, V., Dang, T.K., Kueng, J. and Kong, L. (2024), "A review of in-memory computing for machine learning: architectures, options", International Journal of Web Information Systems, Vol. 20 No. 1, pp. 24-47. https://doi.org/10.1108/IJWIS-08-2023-0131

Publisher

Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited