Author: Vian S. Al-Doori

A simplified spatial modulation MISO-OFDM scheme

Index modulation is one of the promising techniques for future communication systems due to its many improvements over classical orthogonal frequency division multiplexing systems, such as a single RF chain, increased throughput for the same modulation order, a tradeoff between power efficiency and spectral efficiency, and elimination of inter-channel interference. Many forms of index modulation research exist, in which symbols are conveyed in antennas, subcarriers, time slots, and the space-time matrix. Spatial modulation is one member of the index modulation family, in which symbols are conveyed by activating transmit/receive antennas. In this paper, a standard multiple-input single-output scheme is modified by integrating spatial modulation using a simplified mathematical procedure. On the transmitter side, data and activation symbols are distributed simultaneously using the mathematical modulo and floor functions. At the receiver, a simplified maximum likelihood detector is used to recover the transmitted pair of symbols. To verify this, MATLAB Simulink is used to simulate a downlink system where spatial modulation is applied at a base station. Results for different transmit antenna numbers and modulation orders are obtained in the form of bit error rate versus signal-to-noise ratio. DOI: 10.12928/TELKOMNIKA.V18I4.13873
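The transmitter-side mapping described above, distributing data and antenna-activation symbols simultaneously with the modulo and floor functions, can be sketched as follows. This is a minimal illustration only: the variable names (Nt, M) and the exact split (floor for the antenna index, modulo for the data symbol) are assumptions, not the paper's exact formulation.

```python
Nt = 4  # number of transmit antennas (assumed value for illustration)
M = 4   # data constellation size, e.g. QPSK (assumed value)

def sm_map(m, Nt=Nt, M=M):
    """Split one input index m in [0, Nt*M) into an (antenna, symbol) pair."""
    antenna = m // M   # floor function selects which antenna to activate
    symbol = m % M     # modulo function selects the data constellation point
    return antenna, symbol

def sm_demap(antenna, symbol, M=M):
    """Inverse mapping, conceptually what the maximum likelihood detector
    recovers once it has decided on the (antenna, symbol) pair."""
    return antenna * M + symbol
```

With Nt = M = 4, each input index carries log2(16) = 4 bits: two select the active antenna and two select the data symbol, which is how spatial modulation raises throughput for the same modulation order.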

Space Division Multiple Access (SDMA) Base Station Based on a Block Adaptive Euclidean Direction Search Algorithm

A long impulse response is present in wireless communication channels in systems such as massive MIMO or 5G that use block coding, such as Low-Density Parity-Check (LDPC) codes. To deal with this, it is favorable to use block adaptive filtering instead of sequential or traditional algorithms. In this study, a block adaptive algorithm called the Speedy Euclidean Direction Search (SEDS) algorithm is proposed for a Space Division Multiple Access (SDMA) base station to perform the adaptive beamforming operation of the system. An investigation and analysis of the performance of the SEDS algorithm were conducted using SDMA with a Multiple Input Single Output (MISO) downlink scheme that uses block processing of the data samples for a single-cell downlink. Moreover, a novel and fair comparative study was carried out between SEDS and the Block adaptive Least Mean Square (BLMS) algorithm. As far as we know, no previous research has used the BLMS algorithm for an SDMA system, which is another contribution of this study. The simulation results demonstrate that the SEDS algorithm has fast convergence and very accurate estimation compared with BLMS. Moreover, the SEDS algorithm performs better in indoor environments than BLMS. On the other hand, SEDS becomes unstable for large values of the block length (L), whereas BLMS stays stable, especially in outdoor environments. DOI: 10.5573/IEIESPC.2022.11.2.133
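The BLMS baseline mentioned above updates the filter weights once per block of L samples rather than once per sample. The following is a minimal sketch of that idea, assuming a simple tap-delay-line channel estimation setup; it is illustrative only, not the authors' beamforming implementation.

```python
import numpy as np

def block_lms(x, d, num_taps=4, L=8, mu=0.05):
    """Block LMS sketch: the weight vector is updated once per block of L
    samples using the gradient averaged over that block."""
    w = np.zeros(num_taps)
    xp = np.concatenate([np.zeros(num_taps - 1), x])  # pad for initial taps
    N = len(d)
    for b in range(0, N - L + 1, L):
        grad = np.zeros(num_taps)
        for n in range(b, b + L):
            u = xp[n:n + num_taps][::-1]  # tap-delay-line input vector
            e = d[n] - w @ u              # a-priori error for sample n
            grad += e * u
        w += mu * grad / L                # one weight update per block
    return w
```

A larger L means fewer but smoother updates, which is why block length is the key tuning parameter the abstract compares the two algorithms on.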

Accurate Recognition of Natural Language Using Machine Learning and Feature Fusion Processing

To enhance the performance of Chinese language pronunciation evaluation and speech recognition systems, researchers are focusing on developing intelligent techniques for multilevel fusion processing of data, features, and decisions using deep learning-based computer-aided systems. With a combination of score-level, rank-level, and hybrid-level fusion, as well as fusion optimization and fusion score improvement, these systems can effectively combine multiple models and sensors to improve the accuracy of information fusion. Additionally, intelligent systems for information fusion, including those used in robotics and decision-making, can benefit from techniques such as multimedia data fusion and machine learning for data fusion. Furthermore, optimization algorithms and fuzzy approaches can be applied to data fusion applications in cloud environments and e-systems, while spatial data fusion can be used to enhance the quality of image and feature data. In this paper, a new approach is presented to identify tonal language in continuous speech. This study proposes the machine learning-assisted automatic speech recognition framework (ML-ASRF) for Chinese character and language prediction. Our focus is on extracting highly robust features and combining various speech signal sequences from deep models. The experimental results demonstrate that the machine learning neural network's recognition rate is considerably higher than that of a conventional speech recognition algorithm, enabling more accurate human-computer interaction and increasing the efficiency of determining Chinese language pronunciation accuracy. DOI: 10.54216/FPA.100108
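Of the fusion levels the abstract lists, score-level fusion is the simplest to illustrate: each model outputs class-posterior scores and the system combines them before deciding. The sketch below assumes a weighted-average combination rule and invented score values; it is not the ML-ASRF framework itself.

```python
import numpy as np

def fuse_scores(scores_a, scores_b, w=0.5):
    """Score-level fusion sketch: combine two models' class-posterior
    matrices (rows = utterances, columns = classes) by a weighted average,
    then pick the class with the highest fused score."""
    fused = w * np.asarray(scores_a) + (1 - w) * np.asarray(scores_b)
    return fused.argmax(axis=-1)

# hypothetical usage: an acoustic model is unsure, while a second
# (e.g. tonal-feature) model is confident about class 1
pred = fuse_scores([[0.55, 0.45]], [[0.1, 0.9]])
```

Rank-level fusion would instead combine the per-class rankings each model produces, and hybrid-level fusion mixes both strategies.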

A machine learning approach for risk factors analysis and survival prediction of Heart Failure patients

In this study, we propose machine learning (ML) methods for risk factor analysis and survival prediction of Heart Failure (HF) patients using a survival dataset. Five supervised ML methods are applied to the dataset: Decision Tree (DT), Decision Tree Regressor (DTR), Random Forest (RF), XGBoost, and Gradient Boosting (GB). We compare the applied algorithms' performance based on accuracy, precision, recall, F-measure, and log loss, and show that RF provides the highest accuracy of 97.78%. The analysis of the risk factors shows the most predictive features based on coefficients and feature importance. The top six risk factors for HF patients are serum creatinine (SC), age, ejection fraction (EF), platelets, creatinine phosphokinase (CPK), and serum sodium (SS). Further analysis of these factors shows significant clustering of the features. The survival analysis finds that increases in SC, age, and SS and a decrease in EF are the most significant risk factors for HF patients. Our results suggest that HF survival prediction is possible with higher accuracy using the proposed model. Our ML models are useful in clinical settings for screening patients with HF probability. DOI: 10.1016/j.health.2023.100182
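The Random Forest step described above can be sketched as follows on synthetic data loosely shaped like the abstract's top risk factors. Every distribution, value, and label here is invented for illustration; the study itself uses a real clinical HF survival dataset, and this sketch makes no claim about its reported 97.78% accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # samples per class (invented)
# invented feature distributions echoing the abstract's reported directions:
# higher serum creatinine (SC) and age, lower ejection fraction (EF) => risk
sc  = np.concatenate([rng.normal(1.0, 0.3, n), rng.normal(2.2, 0.4, n)])
ef  = np.concatenate([rng.normal(45, 8, n),   rng.normal(27, 7, n)])
age = np.concatenate([rng.normal(55, 10, n),  rng.normal(68, 10, n)])
X = np.column_stack([sc, ef, age])
y = np.array([0] * n + [1] * n)  # 1 = death event (synthetic label)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, rf.predict(X_te))
ranked = rf.feature_importances_  # per-feature importance scores, sum to 1
```

The `feature_importances_` attribute is one way to obtain the feature-importance ranking the abstract refers to; coefficient-based analysis would require a linear model instead.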