publications
2025
- Conference: Radio Fingerprinting of Wi-Fi Devices Through MIMO Compressed Channel Feedback. Francesca Meneghello, Khandaker Foysal Haque, and Francesco Restuccia. In IEEE INFOCOM 2025 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2025.
In this paper, we present DeepCSIv2, a data-driven radio fingerprinting (RFP) algorithm to characterize Wi-Fi devices acting as stations (STAs) at the physical layer. Our approach relies on STA-specific hardware impairments extracted from the multiple-input, multiple-output (MIMO) channel state information (CSI) feedback transmitted unencrypted by the STAs to Wi-Fi access points (APs) to establish MIMO transmissions. Recent work showed that such feedback can be effectively used to fingerprint devices acting as APs. In this work, we demonstrate that the same control information can be leveraged to obtain relevant features describing the STAs. Our intuition is that even tiny STA-specific hardware characteristics introduce detectable impairments in the channel estimated by the STA, and percolate into the CSI feedback, which consists of the compressed and quantized channel estimate. DeepCSIv2 is based on a neural network architecture that automatically extracts the STA's radio fingerprint from the feedback captured over the air and identifies the device. We evaluated DeepCSIv2 through an extensive data collection campaign using 18 commercial IEEE 802.11ax network interface cards (NICs) of the same type from the same vendor. We considered different experiment configurations, changing the propagation environment and the operational bandwidth. The results show that DeepCSIv2 reaches an accuracy higher than 96% in identifying the 18 NICs in our dataset. We will share the dataset and RFP code with the community for reproducibility.
- Conference: PhyDNNs: Bringing Deep Neural Networks to the Physical Layer. Mohammad Abdi, Khandaker Foysal Haque, Francesca Meneghello, Jonathan Ashdown, and Francesco Restuccia. In IEEE INFOCOM 2025 - IEEE Conference on Computer Communications, 2025.
Emerging applications require mobile devices to continuously execute complex deep neural networks (DNNs). While mobile edge computing (MEC) may reduce the computation burden of mobile devices, it exhibits excessive latency as it relies on encapsulating and decapsulating frames through the network protocol stack. To address this issue, we propose PhyDNNs, an approach where DNNs are modified to operate directly at the physical layer (PHY), thus significantly decreasing latency, energy consumption, and network overhead. Unlike recent work in Joint Source and Channel Coding (JSCC), PhyDNNs adapts already-trained DNNs to work at the PHY. To this end, we developed a novel information-theoretical framework to fine-tune PhyDNNs based on the trade-off between communication efficiency and task performance. We have prototyped PhyDNNs with an experimental testbed using a Jetson Orin Nano as the mobile device and two USRP software-defined radios (SDRs) for wireless communication. We evaluated the performance of PhyDNNs considering various channel conditions, DNN models, and datasets. We also tested PhyDNNs on the Colosseum network emulator considering two different propagation scenarios. Experimental results show that PhyDNNs can reduce the end-to-end inference latency, amount of transmitted data, and power consumption by up to 48×, 1385×, and 13×, respectively, while keeping the accuracy within 7% of the state-of-the-art approaches. Moreover, we show that PhyDNNs experience 4.3× less latency than the most recent JSCC method while incurring only a 1.79% performance loss. For replicability, we share the source code of the PhyDNNs implementation.
- Conference: DEER: Simultaneous Multi-Modal Decentralized Energy Efficient Covert Routing. Khandaker Foysal Haque, Justin Kong, Terrence J. Moore, Francesco Restuccia, and Fikadu T. Dagefu. In 2025 IEEE Wireless Communications and Networking Conference (WCNC), 2025.
A fundamental challenge in covert routing is that meeting both covertness and throughput requirements often leads to increased transmit power, which can significantly elevate the overall energy consumption of the network. Therefore, it is important to achieve higher throughput and better energy efficiency while maintaining the required covertness. To this end, we propose DEER, a novel simultaneous multi-modal Decentralized Energy-Efficient covert Routing approach for multi-hop heterogeneous networks (HetNets). Unlike the prevailing single-modal approaches, DEER leverages the diversity of the available wireless communication technologies for simultaneous multi-modal routing. DEER aims to minimize the end-to-end total transmit power of the whole route in a decentralized fashion while maintaining the constraints on required throughput and covertness. DEER consists of two main steps: node-level optimization followed by network-level optimization using the proposed custom-tailored Dijkstra-based link-state routing protocol to meet the constraints while minimizing the end-to-end total transmit power. We demonstrate by numerical analysis that DEER improves the energy efficiency by 23.5× and 2.9× in comparison to the baseline single-modal and naive simultaneous multi-modal approaches, respectively.
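The network-level step described above searches a link-state graph for the route minimizing end-to-end transmit power. A minimal, self-contained sketch of that idea follows; the graph, link powers, and function name are hypothetical illustrations, not DEER's actual protocol, which additionally enforces throughput and covertness constraints:

```python
import heapq

def min_power_route(graph, src, dst):
    """Dijkstra's algorithm over a link-state graph where each edge weight
    is the transmit power (e.g., in mW) required to sustain the link.
    Returns the route minimizing the end-to-end total transmit power."""
    # graph: {node: [(neighbor, tx_power_required), ...]}
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, p in graph.get(u, []):
            nd = d + p
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy topology: two candidate routes from S to D with different total power.
links = {"S": [("A", 2.0), ("B", 5.0)],
         "A": [("D", 3.0)],
         "B": [("D", 1.0)],
         "D": []}
route, power = min_power_route(links, "S", "D")
print(route, power)  # ['S', 'A', 'D'] 5.0
```

In the multi-modal setting, each edge would carry the best power found by the node-level optimization across the available wireless technologies.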
- Preprint: DECOR: Multi-Modal Decentralized Cluster-based Energy Efficient Covert Routing in HetNets. Khandaker Foysal Haque, Justin H. Kong, Terrence J. Moore, Kevin Chan, Francesco Restuccia, and Fikadu T. Dagefu. Preprint, 2025.
State-of-the-art covert routing in heterogeneous networks (HetNets) focuses on balancing covertness and throughput, but often overlooks explicit energy optimization. While covert communication inherently limits transmit power, meeting throughput demands without coordinated design can still lead to high energy consumption. In contrast, our approach jointly considers covertness, throughput, and energy efficiency, addressing all three objectives simultaneously. To this end, we propose DECOR, a Decentralized Energy-efficient COvert Routing framework. Unlike traditional methods that use a single wireless technology, DECOR leverages the diversity of available wireless communication technologies in HetNets to enable simultaneous multi-modal routing. The core idea behind DECOR is that optimal simultaneous utilization of multiple modalities improves throughput and overall energy efficiency. It minimizes the end-to-end energy consumption while satisfying stringent constraints on throughput and covertness through two core steps: (1) link-level optimization using sequential least squares programming (SLSQP), and (2) network-level optimization through a custom cluster-based routing strategy. This novel clustering-based strategy aggregates intra-cluster link information and delegates routing decisions to cluster heads, significantly reducing control overhead and enabling scalable, energy-efficient covert communication. Extensive numerical analysis demonstrates that DECOR significantly outperforms existing approaches in terms of energy efficiency and data overhead.
- Journal: BeamSense: Rethinking Wireless Sensing with MU-MIMO Wi-Fi Beamforming Feedback. Khandaker Foysal Haque, Milin Zhang, Francesca Meneghello, and Francesco Restuccia. Computer Networks, 2025.
In this paper, we propose BeamSense, a completely novel approach to implement standard-compliant Wi-Fi sensing applications. Existing work leverages the manual extraction of the uncompressed channel state information (CSI) from Wi-Fi chips, which is not supported by the 802.11 standards and hence requires the use of specialized equipment. On the contrary, BeamSense leverages the standard-compliant compressed beamforming feedback information (BFI), i.e., the beamforming feedback angles (BFAs), to characterize the propagation environment. Unlike the uncompressed CSI, the compressed BFAs (i) can be recorded without any firmware modification, and (ii) simultaneously capture the channels between the access point and all the stations, thus providing much better sensitivity. BeamSense features a novel cross-domain few-shot learning (FSL) algorithm for human activity recognition to handle unseen environments and subjects with a few additional data samples. We evaluate BeamSense through an extensive data collection campaign with three subjects performing twenty different activities in three different environments. We show that our BFA-based approach achieves about 10% more accuracy when compared to CSI-based prior work, while our FSL strategy improves accuracy by up to 30% when compared with state-of-the-art cross-domain algorithms. Additionally, to demonstrate its versatility, we apply BeamSense to another smart home application, gesture recognition, achieving over 98% accuracy across various orientations and subjects. We share the collected datasets and BeamSense implementation code for reproducibility: https://github.com/kfoysalhaque/BeamSense.
- Conference: SOAR: Semantic Multi-User MIMO Communications for Reliable Wireless Edge Computing. Sharon L. G. Contreras, Foysal Haque Khandaker, Francesco Restuccia, and Marco Levorato. In 2025 21st International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), 2025.
Ensuring reliability in Mobile Edge Computing Systems (MECS) is challenging due to wireless channel fluctuations, which affect packet delivery rate and real-time task performance. Existing methods, such as packet replication strategies and deep neural network (DNN) splitting, often lack task-specific adaptability, oversimplifying the effects of channel dynamics on performance. To address these issues, we propose Semantic Offloading through Reliability (SOAR), a task-oriented multi-user MIMO framework for wireless edge computing executing vision tasks, e.g., object detection and image classification. The SOAR pipeline uses distributional deep reinforcement learning (DDRL) agents with a multi-branched context-aware neural network. Two neural gates analyze onboard features to identify the context and contextual features, enabling a DDRL agent to optimize resource usage and task-specific packet-loss objectives. We evaluated SOAR in a real-world vehicular system under line-of-sight (LoS) and non-line-of-sight (NLoS) propagation scenarios; SOAR reduces resource utilization by 35-40% compared to fixed antenna configuration benchmarks.
- Letter: SCOPE: Cooperative Integrated Communications and Sensing for Material Classification at Sub-Terahertz Frequencies. Khandaker Foysal Haque, Xavier Cantos-Roman, Francesca Meneghello, Josep Miquel Jornet, and Francesco Restuccia. IEEE Wireless Communications Letters, 2025.
In this letter, we propose SCOPE—a novel, entropy-weighted ensembling approach for material classification at sub-Terahertz (THz) frequencies. Unlike existing methods that primarily use dedicated radars, SCOPE builds upon an integrated communication and sensing system and leverages information from both penetrating and reflected signals to enhance spatial resolution and detection accuracy across environments. We adopted spatial variability augmentation (SVA) to address the challenge of generalization across varying transmission distances and antenna gains. While most prior works are limited to radar systems or simulations, SCOPE is implemented and validated in a real sub-THz system working with a 10 GHz bandwidth. Our assessments across different sensing distances, antenna gains, and channel conditions demonstrate the efficacy of SCOPE, which reaches up to 99% accuracy in detecting five materials—glass, wood, metal, air, and plastic—outperforming existing techniques. To facilitate reproducibility, our dataset and code are available at: https://github.com/kfoysalhaque/SCOPE.
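Entropy-weighted ensembling, as named above, fuses several classifiers' probability outputs while down-weighting uncertain (high-entropy) branches. The sketch below illustrates that general idea on made-up probability vectors; the weighting rule and variable names are assumptions, not SCOPE's exact formulation:

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def entropy_weighted_ensemble(predictions):
    """Fuse per-branch class-probability vectors, giving more weight to
    low-entropy (confident) branches. Illustrative weighting only."""
    n_classes = len(predictions[0])
    max_h = math.log(n_classes)  # entropy of the uniform distribution
    weights = [max_h - entropy(p) + 1e-9 for p in predictions]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * p[c] for w, p in zip(weights, predictions))
            for c in range(n_classes)]

# Hypothetical case: the penetrating-signal branch is confident,
# the reflected-signal branch is nearly uniform (uncertain).
pen = [0.9, 0.05, 0.05]
ref = [0.34, 0.33, 0.33]
fused = entropy_weighted_ensemble([pen, ref])
print(max(range(3), key=lambda c: fused[c]))  # 0
```

The fused decision follows the confident branch, which is the behavior such an ensemble is designed to produce when one sensing modality is unreliable.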
- Conference: MAGIC: Meta-Learning Adaptive Gesture Recognition with mmWave MIMO CSI. Khandaker Foysal Haque, KM Rumman, Arman Elyasi, Francesca Meneghello, and Francesco Restuccia. In 2025 IEEE 26th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2025.
In this paper, we present MAGIC, a novel approach to gesture recognition utilizing mmWave multiple-input multiple-output (MIMO) Channel State Information (CSI). Unlike existing mmWave gesture recognition methods that often rely on radar signals, MAGIC leverages CSI extracted from mmWave MIMO integrated sensing and communication (ISAC) systems. While advanced radar systems, such as those operating in frequency-modulated continuous wave (FMCW) mode, can achieve high frequency and spatial resolution, they typically require dedicated sensing infrastructure, which increases system complexity. In contrast, MAGIC utilizes highly granular CSI from orthogonal frequency-division multiplexing (OFDM) systems, providing fine spatial, temporal, and frequency-domain information for robust gesture recognition. This eliminates the need for dedicated radar transceivers, simplifying the system and reducing transmission overhead. MAGIC employs a learning-based architecture, integrating a temporal convolutional network (TCN) to classify gestures by capturing long-range temporal dependencies. To address the critical challenge of domain adaptation in gesture recognition, we propose the adaptive temporal embedding network (ATEN), a meta-learning framework that combines the temporal modeling capabilities of TCN with task-specific adaptation mechanisms. We evaluate MAGIC through a comprehensive data collection campaign involving two subjects performing 10 micro-gestures across three different environments, with synchronized video streams providing the ground truth. The proposed system achieves a baseline accuracy of 99.24% using TCN. The system continues to perform well – achieving up to 98.82% accuracy – when adapting to new domains using ATEN, outperforming other state-of-the-art domain adaptation methods by 14% on average.
2024
- Conference: m3MIMO: An 8×8 mmWave Multi-User MIMO Testbed for Wireless Research. Khandaker Foysal Haque, Francesca Meneghello, KM Rumman, and Francesco Restuccia. In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, 2024.
In this paper, we present m3MIMO, a mmWave fully-digital multi-user multi-input multi-output (MU-MIMO) testbed for advanced wireless research. m3MIMO operates in the 57-64 GHz frequency range and supports up to 1 GHz of bandwidth, enabling large data multiplexing in the frequency domain through orthogonal frequency-division multiplexing (OFDM). The testbed features three custom-designed Zynq UltraScale+ RFSoC-based Software Defined Radios (SDRs) empowered with the Pi-Radio fully digital transceivers. Two of these SDRs support eight transmit and receive streams each (8×8 MIMO), while the third SDR supports up to four channels. m3MIMO supports three different communication modes: (i) point-to-point (P2P) transmissions; (ii) single-user multi-input multi-output (SU-MIMO), where multiple streams are transmitted to a single end-device; and (iii) MU-MIMO, where two devices are simultaneously served by a single transmitter. To showcase m3MIMO's versatility, we present two research use cases: tracking-based beamforming and mmWave-based sensing. We will open-source the m3MIMO code along with the relevant use-case datasets, facilitating further analysis.
- Journal: PredXGBR: A Machine Learning Framework for Short-Term Electrical Load Prediction. Rifat Zabin, Khandaker Foysal Haque, and Ahmed Abdelgawad. Electronics, 2024.
The growing demand for consumer-end electrical load is driving the need for smarter management of power sector utilities. In today's technologically advanced society, efficient energy usage is critical, leaving no room for waste. To prevent both electricity shortage and wastage, electrical load forecasting is the most practical way out. However, conventional and probabilistic methods are less adaptive to acute, micro, and unusual changes in the demand trend. With the recent development of artificial intelligence (AI), machine learning (ML) has become the most popular choice due to its higher accuracy based on time-, demand-, and trend-based feature extraction. Thus, we propose an Extreme Gradient Boosting (XGBoost) regression-based model, PredXGBR-1, which employs short-term lag features to predict hourly load demand. The novelty of PredXGBR-1 lies in its focus on short-term lag autocorrelations to enhance adaptability to micro-trends and demand fluctuations. Validation across five datasets, representing electrical load in the eastern and western USA over a 20-year period, shows that PredXGBR-1 outperforms a long-term feature-based XGBoost model, PredXGBR-2, and state-of-the-art recurrent neural network (RNN) and long short-term memory (LSTM) models. Specifically, PredXGBR-1 achieves a mean absolute percentage error (MAPE) between 0.98% and 1.2% and an R² value of 0.99, significantly surpassing PredXGBR-2's R² of 0.61 and delivering up to 86.8% improvement in MAPE compared to LSTM models. These results confirm the superior performance of PredXGBR-1 in accurately forecasting short-term load demand.
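The short-term lag idea above amounts to predicting each hourly load from the immediately preceding hours, scored with MAPE. A minimal sketch of the feature construction and the metric (the toy series and `n_lags` value are made up; the paper's actual feature set and XGBoost configuration are not reproduced here):

```python
def make_lag_features(series, n_lags):
    """Build (X, y) pairs where each target is predicted from the
    previous n_lags hourly loads (the short-term lag idea)."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])  # the n_lags most recent values
        y.append(series[t])             # the value to predict
    return X, y

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

load = [100, 102, 105, 103, 101, 104, 108, 110]  # hypothetical hourly demand
X, y = make_lag_features(load, n_lags=3)
print(X[0], y[0])  # [100, 102, 105] 103
```

Each row of `X` would then be fed to a gradient-boosted regressor; the short-lag window is what lets the model track micro-trends in demand.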
- Conference: Integrated Sensing and Communication for Efficient Edge Computing. Khandaker Foysal Haque, Francesca Meneghello, and Francesco Restuccia. In 2024 20th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2024.
Emerging mobile virtual reality (VR) systems are required to continuously perform complex computer vision tasks needing computational power that is excessive for mobile devices. Thus, techniques based on wireless edge computing (WEC) have been recently proposed. However, existing WEC methods require the transmission and processing of a high amount of video data, which may ultimately saturate the wireless link. In this paper, we propose a novel integrated sensing and communication-assisted edge computing (ISAC-EC) approach to address this issue. ISAC-EC leverages knowledge about the physical environment to reduce the end-to-end latency and overall computational burden by transmitting to the edge server only the data relevant for the delivery of the service. Our intuition is that the transmission of the portion of the video frames where there are no changes with respect to the previous frames can be avoided. Through wireless sensing, only the part of the frames where any environmental change is detected is transmitted and processed. We evaluated ISAC-EC by using a 10K 360° camera with a Wi-Fi 6 sensing system operating at 160 MHz and performing localization and tracking. Experimental results show that ISAC-EC reduces both the channel occupation and end-to-end latency by more than 90% while improving the instance segmentation and object detection performance with respect to state-of-the-art WEC approaches. For reproducibility purposes, we pledge to share our dataset and code repository.
- Letter: Evaluating the Impact of Channel Feedback Quantization and Grouping in IEEE 802.11 MIMO Wi-Fi Networks. Francesca Meneghello, Khandaker Foysal Haque, and Francesco Restuccia. IEEE Wireless Communications Letters, 2024.
In this letter, we shed light on the impact of multiple-input, multiple-output (MIMO) beamforming feedback quantization and orthogonal frequency-division multiplexing (OFDM) sub-channel grouping on communication performance, for IEEE 802.11ac/ax Wi-Fi networks. We performed an extensive data collection campaign with commercial Wi-Fi devices deployed in different propagation environments considering several network configurations. Our objective is to provide a benchmark for research on efficient feedback compression mechanisms and to enable further analysis. As such, we pledge to share the datasets we collected and the emulation framework we developed.
- Conference: BFA-Sense: Learning Beamforming Feedback Angles for Wi-Fi Sensing. Khandaker Foysal Haque, Francesca Meneghello, and Francesco Restuccia. In 2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), 2024.
In this paper, we propose BFA-Sense, a completely novel approach to implement standard-compliant Wi-Fi sensing applications. Wi-Fi sensing enables game-changing applications in remote healthcare, home entertainment, and home surveillance, among others. However, existing work leverages the manual extraction of the uncompressed channel state information (CSI) from Wi-Fi chips, which is not supported by 802.11 standard-compliant devices and hence requires the use of specialized equipment. On the contrary, BFA-Sense leverages the compressed beamforming feedback angles (BFAs) transmitted during the standard-compliant sounding procedure to characterize the propagation environment. Unlike the uncompressed CSI, BFAs (i) can be recorded without any firmware modification, and (ii) allow a single monitor device to simultaneously capture the channels between the access point and all the stations, thus providing much better sensitivity. We evaluate BFA-Sense through an extensive data collection campaign with three subjects performing twenty different activities in three different environments. We assess the cross-domain adaptability of BFA-Sense through embedding learning for tackling unseen environments with a few samples from the new environment. The results show that the proposed BFA-based approach achieves about 11% more accuracy when compared to CSI-based prior work.
- Preprint: SAWEC: Sensing-Assisted Wireless Edge Computing. Khandaker Foysal Haque, Francesca Meneghello, Md Ebtidaul Karim, and Francesco Restuccia. arXiv preprint arXiv:2402.10021, 2024.
Emerging mobile virtual reality (VR) systems will need to continuously perform complex computer vision tasks on ultra-high-resolution video frames through the execution of deep neural network (DNN)-based algorithms. Since state-of-the-art DNNs require computational power that is excessive for mobile devices, techniques based on wireless edge computing (WEC) have been recently proposed. However, existing WEC methods require the transmission and processing of a high amount of video data, which may ultimately saturate the wireless link. In this paper, we propose a novel Sensing-Assisted Wireless Edge Computing (SAWEC) paradigm to address this issue. SAWEC leverages knowledge about the physical environment to reduce the end-to-end latency and overall computational burden by transmitting to the edge server only the data relevant for the delivery of the service. Our intuition is that the transmission of the portion of the video frames where there are no changes with respect to previous frames can be avoided. Specifically, we leverage wireless sensing techniques to estimate the location of objects in the environment and obtain insights about the environment dynamics. Hence, only the part of the frames where any environmental change is detected is transmitted and processed. We evaluated SAWEC by using a 10K 360° camera with a Wi-Fi 6 sensing system operating at 160 MHz and performing localization and tracking. We considered instance segmentation and object detection as benchmarking tasks for performance evaluation. We carried out experiments in an anechoic chamber and an entrance hall with two human subjects in six different setups. Experimental results show that SAWEC reduces both the channel occupation and end-to-end latency by more than 90% while improving the instance segmentation and object detection performance with respect to state-of-the-art WEC approaches.
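The core intuition, transmitting only the frame regions where something changed, can be illustrated with a toy per-tile frame diff. Note this is a deliberately simplified stand-in: SAWEC detects changes via wireless sensing, not pixel comparison, and the tile size, threshold, and frames below are invented for the example:

```python
def changed_tiles(prev, curr, tile=2, threshold=0):
    """Return (row, col) origins of fixed-size tiles whose pixels changed
    between two grayscale frames; only these tiles would be transmitted."""
    h, w = len(curr), len(curr[0])
    tiles = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            # Sum of absolute pixel differences inside this tile.
            diff = sum(abs(curr[i][j] - prev[i][j])
                       for i in range(r, min(r + tile, h))
                       for j in range(c, min(c + tile, w)))
            if diff > threshold:
                tiles.append((r, c))
    return tiles

prev = [[0] * 4 for _ in range(4)]          # static 4x4 frame
curr = [row[:] for row in prev]
curr[3][3] = 255                             # change in the bottom-right tile
print(changed_tiles(prev, curr))  # [(2, 2)]
```

With one changed tile out of four, only a quarter of the frame would be sent, which is the kind of channel-occupation saving the abstract reports at much larger scale.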
2023
- Conference: Wi-BFI: Extracting the IEEE 802.11 Beamforming Feedback Information from Commercial Wi-Fi Devices. Khandaker Foysal Haque, Francesca Meneghello, and Francesco Restuccia. In Proceedings of the 17th ACM Workshop on Wireless Network Testbeds, Experimental Evaluation & Characterization, 2023.
Recently, researchers have shown that the beamforming feedback angles (BFAs) used for Wi-Fi multiple-input multiple-output (MIMO) operations can be effectively leveraged as a proxy of the channel frequency response (CFR) for different purposes. Examples are passive human activity recognition and device fingerprinting. However, even though the BFA report frames are sent in clear text, there is not yet a unified open-source tool to extract and decode the BFAs from the frames. To fill this gap, we developed Wi-BFI, the first tool that allows retrieving Wi-Fi BFAs and reconstructing the beamforming feedback information (BFI) – a compressed representation of the CFR – from the BFA frames captured over the air. The tool supports BFA extraction within both IEEE 802.11ac and 802.11ax networks operating on radio channels with 160/80/40/20 MHz bandwidth. Both multi-user and single-user MIMO feedback can be decoded through Wi-BFI. The tool supports real-time and offline extraction and storage of BFAs and BFI. The real-time mode also includes a visual representation of the channel state that continuously updates based on the collected data. Wi-BFI code is open source and the tool is also available as a pip package.
- Conference: Designing of an Underwater-Internet of Things (U-IoT) for Marine Life Monitoring. Asif Sazzad, Nazifa Nawer, Maisha Mahbub Rimi, K. Habibul Kabir, and Khandaker Foysal Haque. In The Fourth Industrial Revolution and Beyond: Select Proceedings of IC4IR+, 2023.
Marine life and environmental monitoring of the deep sea have long been a major field of interest because of the vast expanse of the ocean, with its own dynamics and vulnerabilities. Creating an Underwater Internet of Things (U-IoT) model within an Underwater Wireless Sensor Network (UWSN) provides the scope for proper marine life monitoring, supporting the aspects of the Fourth Industrial Revolution. The U-IoT network model is designed for an automated, efficient, smart process of data transfer for both underwater and overwater communications, through acoustic waves and Radio Frequency (RF) data transfer techniques, respectively. The proposed U-IoT network model is created with an optimum number of autonomous underwater vehicles (AUVs) and surface sinks in order to address Bangladesh's overfishing problem (e.g., the hilsa overfishing problem), which guarantees efficient management of the banning period by the authority. The network model is evaluated by comparing different deployment methods of AUVs and surface sinks, taking the South Patch region of the Bay of Bengal as the target area. The results show that the proposed model transfers adequate marine life motion data from the seafloor and can enhance efficient administration of the overfishing problem.
- Conference: SiMWiSense: Simultaneous Multi-Subject Activity Classification Through Wi-Fi Signals. Khandaker Foysal Haque, Milin Zhang, and Francesco Restuccia. In 2023 IEEE 24th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2023.
Recent advances in Wi-Fi sensing have ushered in a plethora of pervasive applications in home surveillance, remote healthcare, road safety, and home entertainment, among others. Most of the existing works are limited to the activity classification of a single human subject at a given time. Conversely, a more realistic scenario is to achieve simultaneous, multi-subject activity classification. The first key challenge in that context is that the number of classes grows exponentially with the number of subjects and activities. Moreover, it is known that Wi-Fi sensing systems struggle to adapt to new environments and subjects. To address both issues, we propose SiMWiSense, the first framework for simultaneous multi-subject activity classification based on Wi-Fi that generalizes to multiple environments and subjects. We address the scalability issue by using the Channel State Information (CSI) computed from the device positioned closest to the subject. We experimentally prove this intuition by confirming that the best accuracy is experienced when the CSI computed by the transceiver positioned closest to the subject is used for classification. To address the generalization issue, we develop a brand-new few-shot learning algorithm named Feature Reusable Embedding Learning (FREL). Through an extensive data collection campaign in 3 different environments with 3 subjects performing 20 different activities simultaneously, we demonstrate that SiMWiSense achieves a classification accuracy of up to 97%, while FREL improves the accuracy by 85% in comparison to a traditional Convolutional Neural Network (CNN) and by up to 20% when compared to the state-of-the-art few-shot embedding learning (FSEL), by using only 15 seconds of additional data for each class. For reproducibility purposes, we share our 1 TB dataset and code repository: https://github.com/kfoysalhaque/SiMWiSense.
- Conference: Energy Consumption Optimization of Zigbee Communication: An Experimental Approach with XBee S2C Module. Rifat Zabin and Khandaker Foysal Haque. In Proceedings of International Conference on Information and Communication Technology for Development: ICICTD, 2023.
Zigbee is a short-range wireless communication standard that is based on IEEE 802.15.4 and is widely used in both indoor and outdoor Internet of Things (IoT) applications. One of the basic constraints of Zigbee and similar wireless sensor network (WSN) standards is the limited power source, as in most cases the nodes are battery powered. Thus, it is very important to optimize the energy consumption to achieve a good network lifetime. Even though tuning the transmission power level to a lower value might make the network more energy efficient, it also severely degrades the network performance. This work aims to optimize the energy consumption by finding the right balance and trade-off between the transmission power level and network performance through extensive experimental analysis. The packet delivery ratio (PDR) is taken into account for evaluating the network performance. This work also presents a performance analysis of both encrypted and unencrypted Zigbee with the stated metrics in a real-world testbed, deployed in both indoor and outdoor scenarios. The major contributions of this work include (i) optimizing the energy consumption by evaluating the most efficient transmission power level of Zigbee at which the network performance, in terms of PDR, remains good; (ii) identifying and quantifying the trade-offs among PDR, transmission power levels, and current and energy consumption; and (iii) creating an indoor and outdoor Zigbee testbed based on the commercially available XBee S2C module to enable extensive performance analysis.
2022
- Conference: Convolutional Neural Network (CNN) in COVID-19 Detection: A Case Study with Chest CT Scan Images. Khandaker Nusaiba Hafiz and Khandaker Foysal Haque. In 2022 IEEE Region 10 Symposium (TENSYMP), 2022.
Deep learning models, especially Convolutional Neural Networks (CNNs), have performed very well in medical image classification over the last decade. CNNs have already shown great promise in detecting COVID-19 from chest X-ray images. However, owing to their three-dimensional data, chest CT scan images can provide a better understanding of the affected area through segmentation in comparison to chest X-ray images. Yet chest CT scan images have not been explored enough to achieve results comparable to those obtained from X-ray images, although with proper image pre-processing, fine-tuning, and optimization of the models, better results can be achieved. This work aims to fill this gap in the literature. To this end, this work designs both a custom CNN model and three other models based on transfer learning: InceptionV3, ResNet50, and VGG19. The best-performing model is VGG19, with an accuracy of 98.39% and an F1 score of 98.52%. The main contributions of this work include: (i) modeling a custom CNN model and three pre-trained models based on InceptionV3, ResNet50, and VGG19; (ii) training and validating the models with a comparatively large dataset of 1252 COVID-19 and 1230 non-COVID CT images; (iii) fine-tuning and optimizing the designed models based on parameters like the number of dense layers, optimizer, learning rate, batch size, decay rate, and activation functions to achieve better results than most of the state-of-the-art literature; and (iv) making the designed models public in [1] for reproducibility by the research community for further developments and improvements.
- JournalComprehensive performance analysis of zigbee communication: an experimental approach with XBee S2C moduleKhandaker Foysal Haque, Ahmed Abdelgawad, and Kumar YelamarthiSensors, 2022
The recent development of wireless communications has prompted many diversified applications in both industrial and medical sectors. Zigbee is a short-range wireless communication standard that is based on IEEE 802.15.4 and is widely used in both indoor and outdoor applications. Its performance depends on networking parameters, such as baud rates, transmission power, data encryption, hopping, deployment environment, and transmission distances. For optimized network deployment, an extensive performance analysis is necessary. This would facilitate a clear understanding of the trade-offs of the network performance metrics, such as the packet delivery ratio (PDR), power consumption, network life, link quality, latency, and throughput. This work presents an extensive performance analysis of both the encrypted and unencrypted Zigbee with the stated metrics in a real-world testbed, deployed in both indoor and outdoor scenarios. The major contributions of this work include (i) evaluating the most optimized transmission power level of Zigbee, considering packet delivery ratio and network lifetime; (ii) formulating an algorithm to find the network lifetime from the measured current consumption of packet transmission; and (iii) identifying and quantifying the trade-offs of multi-hop communication and data encryption with latency, transmission range, and throughput.
- JournalConvolutional-neural-network-based handwritten character recognition: an approach with massive multisource dataNazmus Saqib, Khandaker Foysal Haque, Venkata Prasanth Yanambaka, and Ahmed AbdelgawadAlgorithms, 2022
Neural networks have made big strides in image classification, and convolutional neural networks (CNNs) work successfully on images directly. Handwritten character recognition (HCR) is now a very powerful tool for detecting traffic signals, translating language, extracting information from documents, and more. Although handwritten character recognition technology is in industrial use, its present accuracy is not outstanding, which compromises both performance and usability. Thus, the character recognition technologies in use are still not very reliable and need further improvement to be extensively deployed for serious and reliable tasks. On this account, English alphabet and digit recognition are performed with a custom-tailored CNN model on two different datasets of handwritten images, Kaggle and MNIST, respectively; the proposed models are lightweight yet achieve higher accuracies than state-of-the-art models. The best two of the twelve designed models are proposed after altering hyper-parameters to observe which models provide the best accuracy for which dataset. In addition, the classification reports (CRs) of these two proposed models are extensively investigated with respect to performance metrics, such as precision, recall, specificity, and F1 score, obtained from the developed confusion matrix (CM). To simulate a practical scenario, the dataset is kept unbalanced, and three further averages of the F measure (micro, macro, and weighted) are calculated, which facilitates a better understanding of the models' performance. The highest accuracy of 99.642% is achieved for digit recognition by the model using the 'RMSprop' optimizer at a learning rate of 0.001, whereas the highest detection accuracy for alphabet recognition is 99.563%, obtained with the proposed model using the 'ADAM' optimizer at a learning rate of 0.00001. The macro F1 and weighted F1 scores for the best two models are 0.998 and 0.997 for digit recognition, and 0.992 and 0.996 for alphabet recognition, respectively.
2021
- ChapterProspects of Internet of Things (IoT) and Machine Learning to Fight Against COVID-19Khandaker Foysal Haque, and Ahmed AbdelgawadIn Advanced Systems for Biomedical Applications, 2021
IoT and Machine Learning have improved multi-fold in recent years and have been playing a great role in healthcare systems, including the detection, screening, and monitoring of patients. IoT has been successfully used to detect different heart diseases and Alzheimer's disease, to help autism patients, and to monitor patients' health conditions at much lower cost while providing better efficiency, reliability, and accuracy. IoT also holds great promise in the fight against COVID-19. This chapter discusses different aspects of IoT in aiding healthcare systems to detect and monitor Coronavirus patients. Two such IoT-based models are also designed: one for automatic thermal monitoring and one for measuring and real-time monitoring of heart rate with wearable IoT devices. The Convolutional Neural Network (CNN) is a Machine Learning algorithm that has performed well in detecting many diseases, including coronary artery disease, malaria, Alzheimer's disease, different dental diseases, and Parkinson's disease. Likewise, CNNs hold substantial promise for detecting COVID-19 patients from medical images such as chest X-rays and CTs. Detecting Corona-positive patients is very important in preventing the spread of this virus. To this end, a CNN model is proposed to detect COVID-19 patients from chest X-ray images. Two CNN models with different numbers of convolutional layers and three other models based on ResNet50, VGG-16, and VGG-19 are evaluated with a comparative analysis. The proposed model performs with an accuracy of 97.5% and a precision of 97.5%. This model gives a Receiver Operating Characteristic (ROC) curve area of 0.975 and an F1-score of 97.5. It can be improved further by increasing the dataset for training the model.
- ConferenceD2D-LoRa latency analysis: An indoor application perspectiveNazmus Saqib, Khandaker Foysal Haque, Kumar Yelamarthi, Prasanath Yanambaka, and Ahmed AbdelgawadIn 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), 2021
LoRaWAN is one of the popular Internet of Things (IoT) wireless technologies due to its versatility, long transmission range, and low-power communication capabilities. In LoRaWAN, data from the source to the destination is routed through gateways, increasing the communication latency. This higher latency, a barrier in real-time applications, has prompted researchers to employ the physical (PHY) layer of the LoRaWAN protocol (commonly termed LoRa), which utilizes Device-to-Device (D2D) communication techniques to minimize the communication latency. However, an extensive analysis of D2D-LoRa is needed for optimizing and better designing the network, and such an analysis is missing in the current literature. To address this void, this paper analyzes the latency performance of D2D-based LoRa by varying Spreading Factors (SF) and bandwidths, and explores the trade-offs with an experimental deployment in a 110 m long indoor environment. The evaluation shows that latency is minimized with SF 7 and a bandwidth of 500 kHz: 33.67 ms at 0 m and 53 ms at 110 m for a 13-byte data packet in each case.
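The SF/bandwidth trade-off studied in this paper can be illustrated with Semtech's standard LoRa time-on-air formula (from the SX127x datasheets). The sketch below is not the paper's measurement methodology — the reported 33.67 ms is end-to-end latency, of which radio airtime is only one component, the rest being processing overhead — and the function name and default parameters (8-symbol preamble, coding rate 4/5, explicit header, CRC on) are illustrative assumptions:

```python
import math

def lora_airtime_ms(payload_bytes, sf, bw_hz, cr=1, preamble=8,
                    explicit_header=True, crc=True, low_dr_opt=False):
    """LoRa packet time-on-air per Semtech's SX127x formula (milliseconds)."""
    t_sym = (2 ** sf) / bw_hz * 1000.0  # symbol duration in ms
    de = 1 if low_dr_opt else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

# SF 7 at 500 kHz for a 13-byte payload gives an airtime of roughly 11.6 ms,
# well under the measured 33.67 ms end-to-end latency.
print(round(lora_airtime_ms(13, 7, 500_000), 2))
```

Raising the spreading factor doubles the symbol duration per step, which is why lower SFs and wider bandwidths minimize latency at the cost of range.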
2020
- ConferenceA LoRa based reliable and low power vehicle to everything (V2X) communication architectureKhandaker Foysal Haque, Ahmed Abdelgawad, Venkata P Yanambaka, and Kumar YelamarthiIn 2020 IEEE International Symposium on Smart Electronic Systems (iSES)(Formerly iNiS), 2020
The industrial development of the last few decades has prompted a multi-fold increase in the number of vehicles. With the increased number of vehicles on the road, safety has become one of the major concerns. Inter-vehicular communication, especially Vehicle to Everything (V2X) communication, can address these pressing issues, including autonomous traffic systems and autonomous driving. Extensive research is going on to develop a reliable V2X communication architecture with different wireless technologies such as Long Range (LoRa) communication, Zigbee, LTE, and 5G. The reliability and effectiveness of V2X communication greatly depend on the communication architecture and the associated wireless technology. To this end, a LoRa-based reliable, robust, and low-power V2X communication architecture is proposed in this paper. The communication architecture is designed, implemented, and tested in a real-world scenario to evaluate its reliability. Testing and analysis suggest a vehicle on the road can communicate reliably with roadside infrastructures at different speeds ranging from 10 to 30 Miles per Hour (MPH) with the proposed architecture. At 10 MPH, a vehicle sends one 40-byte data packet every 27 meters, and at 30 MPH, it sends the same data packet every 53 meters, with smooth transitioning from communicating with one infrastructure to another.
- JournalLora architecture for v2x communication: An experimental evaluation with vehicles on the moveKhandaker Foysal Haque, Ahmed Abdelgawad, Venkata Prasanth Yanambaka, and Kumar YelamarthiSensors, 2020
The industrial development of the last few decades has prompted a multi-fold increase in the number of vehicles. With the increased number of vehicles on the road, safety has become one of the primary concerns. Inter-vehicular communication, especially Vehicle to Everything (V2X) communication, can address these pressing issues, including autonomous traffic systems and autonomous driving. The reliability and effectiveness of V2X communication greatly depend on the communication architecture and the associated wireless technology. Addressing this challenge, a device-to-device (D2D)-based reliable, robust, and energy-efficient V2X communication architecture is proposed with LoRa wireless technology. The proposed system takes a D2D communication approach to reduce latency by offering direct vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, rather than routing the data via the LoRaWAN server. Additionally, the proposed architecture offers a modular and compact design, making it ideal for legacy systems without requiring any additional hardware. Testing and analysis suggest the proposed system can communicate reliably with roadside infrastructures and other vehicles at speeds ranging from 15 to 50 km per hour (km/h). The data packet consists of 12 bytes of metadata and 28 bytes of payload. At 15 km/h, a vehicle sends one data packet every 25.9 m, and at 50 km/h, it sends the same data packet every 53.34 m with reliable transitions.
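The per-packet spacing figures above imply a transmission interval that can be recovered from speed and distance; the derived intervals below are my own arithmetic on the reported numbers, not values stated in the paper, and the helper name is illustrative:

```python
def tx_interval_s(speed_kmph, spacing_m):
    """Seconds between successive packets, given vehicle speed (km/h)
    and the distance travelled per packet (m)."""
    speed_mps = speed_kmph * 1000 / 3600  # convert km/h to m/s
    return spacing_m / speed_mps

# 15 km/h with one packet per 25.9 m -> a packet roughly every 6.2 s;
# 50 km/h with one packet per 53.34 m -> roughly every 3.8 s.
print(round(tx_interval_s(15, 25.9), 2))
print(round(tx_interval_s(50, 53.34), 2))
```

This shows the spatial gap grows sub-linearly with speed: the system transmits more often per unit time at higher speed, which is consistent with maintaining reliable handover between infrastructures.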
- ConferenceAutomatic detection of COVID-19 from chest X-ray images with convolutional neural networksKhandaker Foysal Haque, Fatin Farhan Haque, Lisa Gandy, and Ahmed AbdelgawadIn 2020 international conference on computing, electronics & communications engineering (iCCECE), 2020
Deep Learning has improved multi-fold in recent years and has been playing a great role in image classification, which also includes medical imaging. Convolutional Neural Networks (CNNs) have performed well in detecting many diseases, including coronary artery disease, malaria, Alzheimer's disease, different dental diseases, and Parkinson's disease. Likewise, CNNs hold substantial promise for detecting COVID-19 patients from medical images such as chest X-rays and CTs. Coronavirus, or COVID-19, has been declared a global pandemic by the World Health Organization (WHO). As of July 11, 2020, the total COVID-19 confirmed cases were 12.32 M and deaths 0.556 M worldwide. Detecting Corona-positive patients is very important in preventing the spread of this virus. To this end, a CNN model is proposed to detect COVID-19 patients from chest X-ray images. This model is evaluated with a comparative analysis of two other CNN models. The proposed model performs with an accuracy of 97.56% and a precision of 95.34%. It gives a Receiver Operating Characteristic (ROC) curve area of 0.976 and an F1-score of 97.61. It can be improved further by increasing the dataset for training the model.
- JournalA deep learning approach to detect COVID-19 patients from chest X-ray imagesKhandaker Foysal Haque, and Ahmed AbdelgawadAi, 2020
Deep Learning has improved multi-fold in recent years and has been playing a great role in image classification, which also includes medical imaging. Convolutional Neural Networks (CNNs) have performed well in detecting many diseases, including coronary artery disease, malaria, Alzheimer's disease, different dental diseases, and Parkinson's disease. Likewise, CNNs hold substantial promise for detecting COVID-19 patients from medical images such as chest X-rays and CTs. Coronavirus, or COVID-19, has been declared a global pandemic by the World Health Organization (WHO). As of 8 August 2020, the total COVID-19 confirmed cases were 19.18 M and deaths 0.716 M worldwide. Detecting Coronavirus-positive patients is very important in preventing the spread of this virus. To this end, a CNN model is proposed to detect COVID-19 patients from chest X-ray images. Two more CNN models with different numbers of convolutional layers and three other models based on pretrained ResNet50, VGG-16, and VGG-19 are evaluated with a comparative analysis. All six models are trained and validated with Dataset 1 and Dataset 2. Dataset 1 has 201 normal and 201 COVID-19 chest X-rays, whereas Dataset 2 is comparatively larger, with 659 normal and 295 COVID-19 chest X-ray images. The proposed model performs with an accuracy of 98.3% and a precision of 96.72% on Dataset 2, giving a Receiver Operating Characteristic (ROC) curve area of 0.983 and an F1-score of 98.3. Moreover, this work presents a comparative analysis of how changing the number of convolutional layers and increasing the dataset size affect classification performance.
- JournalAdvancement of routing protocols and applications of underwater wireless sensor network (UWSN)—A surveyKhandaker Foysal Haque, K Habibul Kabir, and Ahmed AbdelgawadJournal of Sensor and Actuator Networks, 2020
Water covers the greater part of the Earth's surface, yet little is known about the underwater world, as most of it remains unexplored. Oceans and other water bodies hold substantial natural resources as well as aquatic life, much of which remains undiscovered because the underwater environment is hazardous and unsuited for humans. This motivates the unmanned exploration of these risky environments. Neither unmanned exploration nor distant real-time monitoring is possible without deploying an Underwater Wireless Sensor Network (UWSN); consequently, UWSNs have recently drawn the interest of researchers. With a UWSN, this vast underwater world can be monitored remotely from a distant location with much ease and less risk. The UWSN must be deployed over the volume of the water body to be monitored and surveilled. For vast water bodies such as oceans, rivers, and large lakes, data is collected at different depths of the water column and then delivered to surface sinks. Unlike in terrestrial communication, radio waves and other conventional media do not serve underwater communication well due to their high attenuation and short underwater transmission range. Instead, the acoustic medium can transmit data underwater more efficiently and reliably than other media. To transmit and relay data reliably from the bottom of the sea to the sinks at the surface, multi-hop communication is utilized with different schemes, and leading researchers have proposed different routing protocols for seabed-to-surface-sink communication. The goal of these routing protocols is to make underwater communications more reliable, energy-efficient, and delay-efficient. This paper surveys the advancement of some of these routing protocols, which eventually helps in identifying the most efficient routing protocol, and reviews some recent applications of the UWSN. This work also summarizes the remaining challenges and the future trends of the considered routing protocols. This survey encourages further research efforts to improve UWSN routing protocols for enhanced underwater monitoring and exploration.
- ConferenceAn IoT based efficient waste collection system with smart binsKhandaker Foysal Haque, Rifat Zabin, Kumar Yelamarthi, Prasanth Yanambaka, and Ahmed AbdelgawadIn 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), 2020
Waste collection and management is an integral part of both city and village life. The lack of an optimized and efficient waste collection system adversely affects public health and increases costs. The prevailing traditional waste collection system is neither optimized nor efficient. The Internet of Things (IoT) has been playing a great role in making human life easier by making systems smart, adequate, and self-sufficient. Thus, this paper proposes an IoT-based efficient waste collection system with smart bins. It monitors the waste bins in real time and determines which bins are to be emptied in every cycle of waste collection. The system also presents an enhanced navigation system that shows the best route to collect waste from the selected bins. Four waste bins are assumed at random locations in the city of Mount Pleasant, Michigan. The proposed system decreases the travel distance by 30.76% on average in the assumed scenario, compared to the traditional waste collection system. It thus reduces fuel cost and human labor, making the system optimized and efficient by enabling real-time monitoring and enhanced navigation.
- ConferenceAn energy-efficient and reliable RPL for IoTKhandaker Foysal Haque, Ahmed Abdelgawad, Venkata P Yanambaka, and Kumar YelamarthiIn 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), 2020
Routing Protocol for Low-Power and Lossy Networks (RPL) is an IPv6 routing protocol standardized for the Internet of Things (IoT) by the Internet Engineering Task Force (IETF). RPL forms a tree-like topology based on an optimization process called the Objective Function (OF). In most cases, IoT has to deal with low-power devices and lossy networks, so the major constraints of RPL are the limited power source, network lifetime, and reliability of the network. OFs depend on different metrics, such as Expected Transmission Count (ETX), energy, and Received Signal Strength Indicator (RSSI), for route optimization. In this work, the ETX- and Energy-based OFs have been evaluated in terms of energy efficiency and reliability. For one sink and nine senders, the simulated average power consumption is 1.291 mW for the ETX OF and 1.56 mW for the Energy OF. On the other hand, the average hop count is 1.89 for the ETX OF and 3.01 for the Energy OF. Thus, the ETX OF is more energy-efficient but less reliable, as it takes fewer hops over longer distances; moreover, it does not take load balancing and link quality into account. The Energy OF is more reliable due to its short hops, but it is not energy-efficient and sometimes takes unnecessary hops.
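The ETX metric named above has a standard definition (the expected number of transmissions, including retransmissions, needed for a successful acknowledged delivery over a link); a route's metric is the sum of its links' ETX values. A minimal sketch of that definition, with illustrative function names:

```python
def link_etx(d_forward, d_reverse):
    """Expected Transmission Count of a link, given the forward delivery
    probability (data frame) and reverse delivery probability (ack)."""
    return 1.0 / (d_forward * d_reverse)

def path_etx(links):
    """Route metric: sum of per-link ETX values along the path."""
    return sum(link_etx(df, dr) for df, dr in links)

# A perfect link costs exactly 1 expected transmission; lossy links cost more.
print(link_etx(1.0, 1.0))               # -> 1.0
print(round(link_etx(0.9, 0.8), 2))     # -> 1.39
# One mediocre long hop vs. two good short hops:
print(round(path_etx([(0.7, 0.7)]), 2))
print(round(path_etx([(0.95, 0.95), (0.95, 0.95)]), 2))
```

This additive structure explains the observed behavior: minimizing total ETX tends to pick routes with few hops, even long ones, whereas the Energy OF's shorter hops inflate the hop count.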
2019
- ConferenceAn optimized stand-alone green hybrid grid system for an offshore Island, Saint Martin, BangladeshKhandaker Foysal Haque, Nazmus Saqib, and Md Shamim RahmanIn 2019 International Conference on Energy and Power Engineering (ICEPE), 2019
Saint Martin’s Island is the largest offshore island of Bangladesh and one of the most beautiful tourist spots in the world. However, as the island is far from the mainland, it is not connected to the country’s main grid. This paper proposes an optimized stand-alone green hybrid system to supply electricity to the inhabitants and tourists of the island. Considering 1000 households for all of its inhabitants and 200 hotel rooms for tourists, the average daily load is 1135.82 kWh/day with an annual peak load of 227.76 kW. The aim of this paper is to design the most cost-efficient optimized stand-alone green hybrid system that provides zero emissions and a 100% renewable fraction. HOMER (Hybrid Optimization Model for Multiple Energy Resources) is used to design this system. The simulation results show that a hybrid system with a 659 kW PV array, 3073 strings of batteries, and a 245 kW converter forms the most optimized stand-alone system, with a COE (Cost of Energy) of $0.266 and an NPC (Net Present Cost) of $1,379,832. Significantly, the energy cost of the proposed system is viable in the context of the socioeconomic condition of the country, which will eventually provide a power solution while maintaining the scenic beauty of the island.
- ConferenceAnalysis of grid integrated PV system as home RES with net metering schemeNazmus Saqib, Khandaker Foysal Haque, Rifat Zabin, and Sayed Nahian PreontoIn 2019 international conference on robotics, electrical and signal processing techniques (ICREST), 2019
To meet the increased demand for electricity, PV systems are being used as home RESs (Renewable Energy Sources) throughout the world. In this paper, a grid-integrated PV system is proposed with a net metering scheme. A 149-square-meter home in Dhaka city is considered, with an average daily load of 11.27 kWh/day and an annual peak load of 1.21 kW. According to DESCO (Dhaka Electric Supply Company Limited), over the span of the last year (July 2017 to July 2018) the monthly electricity usage of this home varied from 401 to 600 units (kWh), with a Cost of Energy (COE) of $0.1. Simulation and analysis of the proposed system show that its Cost of Energy (COE) and Net Present Cost (NPC) can be reduced to a great extent with the application of the net metering scheme, which also improves the renewable fraction of the system.