Optimizing Binary Neural Network for Resource-Constrained Edge Devices in IoT Applications

Published online: 01/10/2025

Authors

Van Minh Nguyen, Tien Tu Ngo, Tien Dung Tran, Minh Tam Nguyen, Minh Huan Vo

Corresponding author's email:

huanvm@hcmute.edu.vn

DOI:

https://doi.org/10.54644/jte.2025.1756

Keywords:

Binary Neural Network, XNOR-popcount, Edge device, IoT, Resource-constrained hardware

Abstract

The deployment of artificial intelligence models on edge devices is increasingly popular, offering benefits such as reduced latency, more effective use of bandwidth, improved data security, enhanced privacy, and lower costs for users. However, it also poses challenges in accuracy, processing speed, hardware resources, and model size on devices with limited hardware. The Binary Neural Network (BNN) has been proposed as a potential solution that reduces resource requirements by quantizing values to a single bit. In this study, the BNN is optimized by binarizing both the weights and the activations and replacing multiply-accumulate operations with XNOR-popcount operations. The results show that the optimized BNN has a smaller memory footprint when deployed on hardware with limited computational resources and requires less computation time than a conventional BNN, allowing the model to execute faster as the network architecture becomes less complex, while maintaining acceptable accuracy on the MNIST and Fashion-MNIST datasets. The proposed BNN model can therefore be deployed on edge devices for IoT applications.
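To illustrate the core operation referred to in the abstract, the following C sketch (not the authors' implementation; the bit-packing convention and function names are assumptions for illustration) shows how a dot product between binarized weights and activations reduces to XNOR and popcount: with values in {-1, +1} packed one per bit (+1 as 1, -1 as 0), the dot product over N elements equals 2 * popcount(XNOR(a, w)) - N, so each 32-bit word needs one XNOR and one popcount instead of 32 multiply-accumulates.

/* Illustrative sketch only: assumes {-1,+1} values are packed one per bit
 * (+1 -> 1, -1 -> 0), least-significant bit first. Uses the GCC/Clang
 * __builtin_popcount intrinsic; a portable bit-counting loop could be
 * substituted on other compilers. */
#include <stdint.h>
#include <stdio.h>

/* Dot product of two {-1,+1} vectors of n_bits elements packed into 32-bit words. */
static int xnor_popcount_dot(const uint32_t *a, const uint32_t *w,
                             int n_words, int n_bits)
{
    int matches = 0;
    for (int i = 0; i < n_words; ++i) {
        uint32_t x = ~(a[i] ^ w[i]);               /* XNOR: 1 where the signs agree */
        if (i == n_words - 1 && (n_bits % 32) != 0)
            x &= (1u << (n_bits % 32)) - 1u;       /* mask padding bits in the last word */
        matches += __builtin_popcount(x);          /* count agreeing positions */
    }
    return 2 * matches - n_bits;                   /* agreements minus disagreements */
}

int main(void)
{
    /* 8-element toy example:
     * a = [+1,-1,+1,+1,-1,-1,+1,-1]  ->  0b01001101 = 0x4D
     * w = [+1,+1,-1,+1,-1,+1,+1,-1]  ->  0b01101011 = 0x6B  */
    uint32_t a[1] = { 0x4D };
    uint32_t w[1] = { 0x6B };
    printf("dot = %d\n", xnor_popcount_dot(a, w, 1, 8));   /* prints: dot = 2 */
    return 0;
}

On a 32-bit microcontroller this replaces 32 multiply-accumulate operations per word with a handful of bitwise instructions, which, together with the 1-bit storage of weights and activations, is the main source of the speed and memory savings described above.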

Author Biographies

Van Minh Nguyen, Ho Chi Minh City University of Technology and Education, Vietnam

Van Minh Nguyen holds a strong academic and professional background in the fields of automation and manufacturing technology. He earned his Engineer Degree in 2009, majoring in Automation Technology from Ho Chi Minh City University of Technical Education. Building on this foundation, he completed his Master’s Degree in Mechanical Engineering in 2011 at Southern Taiwan University, Taiwan. His research interests and expertise include Computer Numerical Control (CNC), Electrical Discharge Machining (EDM), and Computer Integrated Manufacturing (CIM), with a focus on the integration of advanced machining technologies and intelligent manufacturing systems.

Email: minhngv@hcmute.edu.vn. ORCID:  https://orcid.org/0009-0009-2454-2764.

Tien Tu Ngo, Kookmin University, Korea

Tien Tu Ngo received a B.S. degree in Computer Engineering Technology from HCMC University of Technology and Education, Vietnam, in 2024. He is pursuing a combined master's and Ph.D. program at Kookmin University, Korea. His research interests include memristor-based circuits and architectures, neuromorphic computing systems, brain-inspired computing, and artificial intelligence.

Email: ttn.twenty.oh.two@gmail.com. ORCID:  https://orcid.org/0009-0000-0554-585X.

Tien Dung Tran, Le Hong Phong High School for The Gifted, Vietnam

Tien Dung Tran graduated from Le Hong Phong High School for the Gifted and is an active member of the Smart Integrated System Solutions Lab. He is passionate about researching smart devices and IoT-based solutions for smart home applications.

Email: dung.s011249@ilamail.edu.vn. ORCID:  https://orcid.org/0009-0003-7432-1480.

Minh Tam Nguyen, Ho Chi Minh City University of Technology and Education, Vietnam

Minh Tam Nguyen received his Bachelor of Electrical Engineering and Power Supply from Ho Chi Minh City University of Technology and Education in 1995, his Master's degree in Electrical Engineering from Ho Chi Minh City University of Technology, Vietnam National University in 2003, and his PhD in Engineering from Sydney University of Technology, Australia in 2010. Dr. Nguyen Minh Tam has been teaching at the Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education since 1995. His main research direction is the application of soft computing techniques in modeling and control.

Email: tamnm@hcmute.edu.vn. ORCID:  https://orcid.org/0009-0000-8230-1373.

Minh Huan Vo, Ho Chi Minh City University of Technology and Education, Vietnam

Minh Huan Vo received the B.S. and M.S.E.E. degrees in Electronics and Communication Engineering from the Ho Chi Minh City University of Technology, Vietnam, in 2005 and 2007, respectively, and the Ph.D. degree in Electronics Engineering from Kookmin University, Seoul, Korea, in 2013. He is currently working as an associate professor at the Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education, Vietnam. His current research interests include chip design, optimization algorithms, AI, and data analytics.

Email: huanvm@hcmute.edu.vn. ORCID:  https://orcid.org/0000-0002-9990-9331.

How to Cite

Nguyen, V. M., Ngo, T. T., Tran, T. D., Nguyen, M. T., & Vo, M. H. (2025). Optimizing Binary Neural Network for Resource-Constrained Edge Devices in IoT Applications. Journal of Technical Education Science. https://doi.org/10.54644/jte.2025.1756
