In-Memory Computing vs. In-Data Computing

In-Memory Computing:
- Takes advantage of memory (RAM) speed.
- Data are loaded and kept in volatile memory for computing.
- Scaling up: improve performance by adding more RAM.
- Assumes data may be larger than memory.
- Different data images in memory and storage.

In-Data Computing:
- Takes advantage of a vast address space as well as memory speed.
- All data are kept in an infinite, persistent memory space, ready for computing.
- Strong locality: improve performance by reducing cache misses and paging.
- Assumes the memory space is sufficient to hold all data.
- Same data image in memory and storage.

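The "same data image in memory and storage" idea can be illustrated with a minimal Python sketch using memory-mapped files. This is an assumption-laden analogy, not BigObject's actual implementation: a persistent file is mapped into the address space and computed on in place, so there is no separate load or serialize step between the storage image and the in-memory image.

```python
import mmap
import os
import struct
import tempfile

# Hypothetical illustration only: persist four int64 values in a file,
# then map the file and compute on it in place, so the in-memory image
# and the on-storage image are one and the same.
path = os.path.join(tempfile.mkdtemp(), "values.bin")
with open(path, "wb") as f:
    f.write(struct.pack("<4q", 10, 20, 30, 40))

with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as mm:
    # Read directly from the mapped storage -- no explicit load into RAM.
    values = struct.unpack_from("<4q", mm, 0)
    total = sum(values)                  # computed where the data lives
    # Update in place; the file on storage holds the new value as well.
    struct.pack_into("<q", mm, 0, 11)

with open(path, "rb") as f:
    first = struct.unpack("<4q", f.read())[0]

print(total)   # 100
print(first)   # 11 -- the write went straight through to storage
```

In the conventional in-memory model, by contrast, the program would read the file into a separate buffer, compute on the copy, and write results back, leaving two distinct data images to keep in sync.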
Quantitative Change Leads to Qualitative Transformation

With the addressable memory space growing from 2^32 to 2^64 bytes, algorithms can be redesigned to reduce time complexity and improve efficiency.
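A quick back-of-the-envelope calculation shows how large that quantitative change is:

```python
# Addressable bytes at 32-bit vs. 64-bit address widths.
addr_32 = 2 ** 32
addr_64 = 2 ** 64

print(addr_32 // 2 ** 30)   # 4  -> a 32-bit space tops out at 4 GiB
print(addr_64 // 2 ** 60)   # 16 -> a 64-bit space spans 16 EiB
print(addr_64 // addr_32)   # the space grows by a factor of 2**32
```

A factor of 2^32 (about 4.3 billion) is large enough that the old assumption "data will not fit in the address space" can simply be dropped, which is what enables the redesigned algorithms.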


Push Down Logic

The key idea behind the optimization is "push-down logic": code is pushed down to where the data is stored, reducing the amount of data extracted from the data space for computing.

Constrained by the size of physical memory, most of today's analytics solutions fetch data from storage or a database into memory, and then execute the computation on the data loaded there. BigObject is an infinite, persistent memory space for data and code, and it performs computation in the place where the data resides. The result is lightning-quick analysis with fewer hardware resources.
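The contrast between the two models can be sketched in a few lines of Python. This is a hypothetical toy (the `rows` data, `fetch_then_compute`, and `push_down` names are invented for illustration), not BigObject's API: the conventional path copies every row out before filtering, while the push-down path evaluates the predicate and reducer where the rows reside, so only the final scalar leaves the data space.

```python
# Toy data store: a handful of rows (hypothetical example data).
rows = [{"region": "east", "amount": 120},
        {"region": "west", "amount": 80},
        {"region": "east", "amount": 50}]

def fetch_then_compute(store):
    # Conventional model: extract everything, then compute on the copy.
    extracted = list(store)                       # full extraction into RAM
    return sum(r["amount"] for r in extracted if r["region"] == "east")

def push_down(store, predicate, reducer):
    # Push-down model: the code travels to the data; only the result returns.
    return reducer(r["amount"] for r in store if predicate(r))

print(fetch_then_compute(rows))                              # 170
print(push_down(rows, lambda r: r["region"] == "east", sum)) # 170
```

Both paths produce the same answer; the difference is how much data crosses the boundary between the data space and the computation, which is what the reduced hardware footprint comes from.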