
  Comparative Analysis of Dataflow Engines and Conventional CPUs in Data-intensive Applications  
  Authors : Abdulkadir Dauda; Haruna Umar Adoga; John Francis Ogbonoko

 

High-performance systems are a vital tool for supporting the rapid developments being recorded in software technologies. Recent innovations in software systems have changed the way we see and deal with our physical world. Many applications today, as part of their functions, implement highly data-intensive algorithms such as machine learning, graphics processing, and scientific calculations, which require high processing power to deliver acceptable performance. The Central Processing Unit (CPU)-based architectures that have been used over the years are not coping well with these classes of applications. This has led to the emergence of a new set of architectures, such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), which are based on the computing paradigm referred to as dataflow computing, in contrast to the popular control-flow computing. In this research, we used the dataflow engines developed by Maxeler Technologies to compare performance with conventional CPU-based parallel systems: we wrote a program for each of the two platforms that solves a typical vector operation and ran it on both. The results of the experiment show that even though the Dataflow Engines (DFEs) used in our experiments run at a clock frequency of 100 MHz, their performance is on par with a quad-core CPU running at 1.86 GHz per core.
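The paper's source code is not reproduced on this page, but a small sketch of the CPU-side baseline helps make the comparison concrete. The snippet below assumes, purely for illustration, that the typical vector operation is element-wise addition of two float vectors and spreads the loop across the available cores with Java's parallel streams; the class name, vector length, and timing are placeholders, not the authors' actual implementation.

import java.util.stream.IntStream;

public class VectorAddCpu {
    public static void main(String[] args) {
        final int n = 1 << 22;   // vector length, chosen only for illustration
        float[] a = new float[n];
        float[] b = new float[n];
        float[] c = new float[n];

        // Fill the inputs with arbitrary data.
        for (int i = 0; i < n; i++) {
            a[i] = 0.5f * i;
            b[i] = 2.0f * i;
        }

        long start = System.nanoTime();

        // Element-wise addition, split across all available cores by the
        // common fork/join pool that backs parallel streams.
        IntStream.range(0, n).parallel().forEach(i -> c[i] = a[i] + b[i]);

        double elapsedMs = (System.nanoTime() - start) / 1e6;
        System.out.printf("c[42] = %.1f, elapsed = %.3f ms%n", c[42], elapsedMs);
    }
}

On a quad-core machine the stream runtime typically splits the index range into several chunks, one or more per core, which is the kind of data-parallel (SIMD-style) workload the paper targets.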

 

Published In : IJCSN Journal Volume 7, Issue 4

Date of Publication : August 2018

Pages : 263-271

Figures : 06

Tables : 02

 

Abdulkadir Dauda : was born in Lafia, Nasarawa State of Nigeria, on the 25th of October 1982. He obtained a Bachelor of Science degree in Computer Science from the Usmanu Danfodiyo University Sokoto, Nigeria, in 2006. He worked with the Nigerian Judiciary as a Programme Analyst from February 2009 to April 2014, when he joined the Federal University Lafia as a Graduate Assistant. In 2015, he proceeded to the University of Bedfordshire, United Kingdom, for his Master of Science degree, which he completed in January 2017. He currently works as an Assistant Lecturer in the Department of Computer Science, Federal University Lafia, Nigeria. His research interests are in the areas of High-Performance Computing and Distributed Systems.

Adoga, H. U. : holds a Bachelor of Engineering (B.Eng.) degree in Electrical & Electronics Engineering from the University of Maiduguri, Nigeria, with specialization in data communications and networks. He also holds a Master of Science (MSc.) degree in Computer Science from the University of Hertfordshire, England. He is currently a lecturer with the Department of Computer Science, Federal University Lafia, Nigeria. His research interests are in the areas of Software Defined Networking (SDN), IoT, and Distributed Systems. He is a registered member of the Institute of Electrical and Electronics Engineers (IEEE), the Nigeria Computer Society (NCS), and the Nigeria Society of Engineers (NSE). As a CCNP professional, Haruna is also fascinated by the design and configuration of computer networks.

Ogbonoko, J. F. : holds a Bachelor of Science (BSc.) degree in Computer Science from the Benue State University, Makurdi, Nigeria. He also holds a Master of Science (MSc.) degree in Software Systems and Internet Technology from the University of Sheffield, United Kingdom. He currently lectures in the Department of Computer Science, Federal University Lafia, Nasarawa State, Nigeria. His research interests are in the areas of Software Engineering, Internet of Things, and Big Data.

 

HPC, Parallel Systems, Control-flow Computing, Dataflow Computing, FPGAs, DFEs, Maxeler Technologies

In this work, we highlighted some of the important advancements in the area of high-performance computing. We discussed the different architectures of CPU-based systems and the considerable performance gained over single-core processors by combining multiple cores. We also pointed out some of the disadvantages of continuing to increase the number of processing units in that manner, such as heat generation, power consumption, and the associated costs. As an alternative to control-flow computing, we discussed dataflow computing and how it can be used to accelerate certain classes of applications characterized as SIMD. As the main aim of our research is to compare performance, we implemented a vector operation on both a CPU-based parallel system and a Maxeler dataflow engine and used it for the experiment. The outcome of our experiments shows that the DFEs perform better even though they run at a lower clock frequency than the CPUs.
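For contrast with the CPU-side sketch above, the fragment below shows roughly what a dataflow kernel for the same element-wise addition looks like in MaxJ, following the kernel style described in Maxeler's MaxCompiler white papers listed in the references. It is a sketch only: the package paths and floating-point type parameters are assumptions tied to MaxCompiler v2, and it is not the authors' actual kernel.

// Package paths assumed for MaxCompiler v2; they may differ between versions.
import com.maxeler.maxcompiler.v2.kernelcompiler.Kernel;
import com.maxeler.maxcompiler.v2.kernelcompiler.KernelParameters;
import com.maxeler.maxcompiler.v2.kernelcompiler.types.base.DFEVar;

// One element of each input stream flows through the kernel per tick, so the
// addition is applied to the whole vector as it streams past, rather than being
// scheduled instruction by instruction as on a control-flow CPU.
public class VectorAddKernel extends Kernel {

    VectorAddKernel(KernelParameters parameters) {
        super(parameters);

        // Two input streams of single-precision values
        // (8 exponent bits, 24 mantissa bits).
        DFEVar a = io.input("a", dfeFloat(8, 24));
        DFEVar b = io.input("b", dfeFloat(8, 24));

        // MaxJ extends Java with operator overloading on DFEVar, so this
        // addition describes a fixed adder laid out on the chip.
        DFEVar c = a + b;

        io.output("c", c, dfeFloat(8, 24));
    }
}

Because the kernel is laid out spatially on the FPGA and produces a result every clock tick once the pipeline is full, its throughput comes from continuous streaming rather than from a high clock rate, which is consistent with the paper's finding that a 100 MHz DFE keeps pace with a 1.86 GHz quad-core CPU on this workload.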

 

[1] Hennessy, J.L. and Patterson, D.A., 2011. Computer architecture: a quantitative approach. Elsevier.
[2] Pell, O. and Mencer, O., 2011. Surviving the end of frequency scaling with reconfigurable dataflow computing. ACM SIGARCH Computer Architecture News, 39(4), pp.60-65.
[3] Flynn, M.J., Pell, O. and Mencer, O., 2012, August. Dataflow supercomputing. In 22nd International Conference on Field Programmable Logic and Applications (FPL) (pp. 1-3). IEEE.
[4] Feng, W.C., Feng, X. and Ge, R., 2008. Green supercomputing comes of age. IT Professional, 10(1), pp.17-23.
[5] Mahapatra, N.R. and Venkatrao, B., 1999. The processor-memory bottleneck: problems and solutions. Crossroads, 5(3es), p.2.
[6] Feng, W.C. and Cameron, K., 2007. The Green500 list: Encouraging sustainable supercomputing. Computer, 40(12), pp.50-55.
[7] Gustafson, J.L., 1988. Reevaluating Amdahl's law. Communications of the ACM, 31(5), pp.532-533.
[8] Sundararajan, P., 2010. High performance computing using FPGAs. Xilinx White Paper: FPGAs, pp.1-15.
[9] Milutinovic, V., Salom, J., Trifunovic, N. and Giorgi, R., 2015. Guide to DataFlow Supercomputing: Basic Concepts, Case Studies, and a Detailed Example. Springer.
[10] Akhter, S. and Roberts, J., 2006. Multi-core programming (Vol. 33). Hillsboro: Intel Press.
[11] Terboven, C., Schmidl, D., Jin, H. and Reichstein, T., 2008, May. Data and thread affinity in OpenMP programs. In Proceedings of the 2008 Workshop on Memory Access on Future Processors: A Solved Problem? (pp. 377-384). ACM.
[12] Jin, H., Jespersen, D., Mehrotra, P., Biswas, R., Huang, L. and Chapman, B., 2011. High performance computing using MPI and OpenMP on multi-core parallel systems. Parallel Computing, 37(9), pp.562-575.
[13] Chalamalasetti, S., Margala, M., Vanderbauwhede, W., Wright, M. and Ranganathan, P., 2012, April. Evaluating FPGA-acceleration for real-time unstructured search. In Performance Analysis of Systems and Software (ISPASS), 2012 IEEE International Symposium on (pp. 200-209). IEEE.
[14] Lee, B. and Hurson, A.R., 1993. Issues in dataflow computing. Advances in Computers, 37, pp.285-333.
[15] Gao, S. and Chritz, J., 2014, December. Characterization of OpenCL on a scalable FPGA architecture. In 2014 International Conference on ReConFigurable Computing and FPGAs (ReConFig14) (pp. 1-6). IEEE.
[16] Maxeler Technologies, 2013. "MaxCompiler white paper." [Online]. Available from: https://www.maxeler.com/media/documents/MaxelerWhitePaperProgramming.pdf [Accessed: 25th October, 2017].
[17] Maxeler Technologies, 2011. "MaxCompiler white paper." [Online]. Available from: http://www.maxeler.com/media/documents/MaxelerWhitePaperMaxCompiler.pdf [Accessed: 24th October, 2017].
[18] Maxeler Technologies, 2016. MPC-X Series. [Online]. Available from: https://www.maxeler.com/products/mpc-xseries/ [Accessed: 28th November, 2017].