Saturday, June 06, 2020

These articles have been peer-reviewed and accepted for publication in JICT, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the JICT standard. Additionally, titles, authors, abstracts and keywords may change before publication.
1Fatima-zahra El-Alami, 1Abdelkader El Mahdaouy, 2,1Said Ouatik El Alaoui & 1Noureddine En-Nahnahi 
1Laboratory of Informatics and Modeling, FSDM, Sidi Mohamed Ben Abdellah University, Fez, Morocco.
2Ibn Tofail University, National School of Applied Sciences, Kenitra, Morocco.


Arabic text representation is a challenging task for several applications, such as text categorization and clustering, since the Arabic language is known for its variety, richness, and complex morphology. Until recently, the Bag-of-Words model remained the most common method for Arabic text representation. However, it suffers from several shortcomings, such as a lack of semantics and the high dimensionality of the feature space. Moreover, most existing methods ignore the explicit knowledge contained in semantic vocabularies such as Arabic WordNet. To overcome these shortcomings, we propose a deep Autoencoder-based representation for Arabic text categorization. It consists of three stages: (1) extracting the most relevant concepts from Arabic WordNet through a feature selection process; (2) learning features for text representation via an unsupervised algorithm; and (3) categorizing text using a deep Autoencoder. Our method captures document semantics by combining both implicit and explicit semantics, and it reduces the dimensionality of the feature space. To evaluate the method, we conducted several experiments on the standard Arabic dataset OSAC. The obtained results show the effectiveness of the proposed method compared to state-of-the-art ones.

Keywords: Arabic text representation, deep Autoencoder, feature selection, Restricted Boltzmann Machines, Text categorization.
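To make the autoencoder stage concrete, the following is a minimal NumPy sketch of the core idea in stage (3): compressing a document-feature matrix through a low-dimensional bottleneck and recovering a dense representation. The data, layer sizes, activation, and training schedule here are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy document-feature matrix: 20 "documents" x 30 "features"
# (stand-ins for selected Arabic WordNet concept features).
X = rng.random((20, 30))

n_hidden = 5                      # size of the compressed representation
W1 = rng.normal(0, 0.1, (30, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, 30))
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(200):
    H = sigmoid(X @ W1)           # encoder: project into low-dim space
    Xh = sigmoid(H @ W2)          # decoder: reconstruct the input
    loss = np.mean((X - Xh) ** 2)
    losses.append(loss)
    # Backpropagate the squared reconstruction error.
    dXh = 2 * (Xh - X) / X.size
    dZ2 = dXh * Xh * (1 - Xh)
    dW2 = H.T @ dZ2
    dH = dZ2 @ W2.T
    dZ1 = dH * H * (1 - H)
    dW1 = X.T @ dZ1
    W1 -= lr * dW1
    W2 -= lr * dW2

codes = sigmoid(X @ W1)           # low-dimensional document representations
```

The bottleneck activations `codes` would serve as the reduced-dimension document vectors fed to the categorizer.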

1Athraa Jasim Mohammed, 1Khalil Ibrahim Ghathwan & 2Yuhanis Yusof
1Computer Science Department, University of Technology - Iraq, Iraq
2School of Computing, Universiti Utara Malaysia, Malaysia



The Least Squares Support Vector Machine (LSSVM) is known to be one of the most effective forecasting models. However, its operation relies on two important parameters (regularization and kernel), and the values chosen for these parameters affect the result of the forecasting model. Hence, to find the optimal values of these parameters, this study investigates the adaptation of the Bat and Cuckoo Search algorithms to optimize the LSSVM parameters. Even though the Cuckoo Search Algorithm has been proven able to solve global optimization problems in various areas, it converges slowly when the step size is large. Hence, to enhance its search ability, the Cuckoo Search Algorithm is integrated with the Bat algorithm, which offers a balanced search between global exploration and local exploitation. To analyze the individual strengths of the Bat and Cuckoo Search algorithms in optimizing the LSSVM parameters, each was first evaluated separately. Five evaluation metrics were utilized: Mean Average Percent Error (MAPE), Accuracy, Symmetric Mean Absolute Percent Error (SMAPE), Root Mean Square Percentage Error (RMSPE), and Fitness. Experimental results on diabetes forecasting demonstrate that the proposed BAT-LSSVM and CUCKOO-LSSVM generate lower MAPE and SMAPE, while producing higher Accuracy and Fitness, compared to PSO-LSSVM and a non-optimized LSSVM. Following this success, the study integrates the two algorithms to optimize the LSSVM. The newly proposed forecasting algorithm, termed CUCKOO-BAT-LSSVM, produces better forecasts in terms of MAPE, Accuracy, and RMSPE. Such an outcome provides an alternative model for facilitating decision making in forecasting.

Keywords: time series forecasting, Least Squares Support Vector Machine, Bat algorithm, Particle Swarm Optimization algorithm, Cuckoo Search algorithm.
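As an illustration of Cuckoo-Search-style parameter tuning, the sketch below searches for LSSVM-like (regularization, kernel width) values with Lévy flights and nest abandonment. The objective function is a stand-in quadratic surface, not a real LSSVM cross-validation error, and the step scale, nest count, and abandonment rate are assumed values; the Bat hybridization from the paper is omitted.

```python
import math
import random

random.seed(42)

def objective(gamma, sigma):
    # Stand-in for the LSSVM cross-validation error surface; the real
    # objective would train an LSSVM with regularization gamma and
    # kernel width sigma and return a validation error.
    return (math.log10(gamma) - 1.0) ** 2 + (math.log10(sigma) + 0.5) ** 2

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-flight step length.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma_u), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

# Each nest is a candidate (log10 gamma, log10 sigma) pair.
nests = [[random.uniform(-2, 3), random.uniform(-2, 2)] for _ in range(15)]
fit = [objective(10 ** g, 10 ** s) for g, s in nests]
initial_best = min(fit)

for _ in range(200):
    best = nests[fit.index(min(fit))]
    i = random.randrange(len(nests))
    # Levy flight biased toward the current best nest.
    cand = [nests[i][d] + 0.1 * levy_step() * (best[d] - nests[i][d])
            for d in range(2)]
    f = objective(10 ** cand[0], 10 ** cand[1])
    if f < fit[i]:                         # greedy replacement
        nests[i], fit[i] = cand, f
    # Abandon a fraction of poor nests (never the current best).
    j = random.randrange(len(nests))
    if random.random() < 0.25 and fit[j] != min(fit):
        nests[j] = [random.uniform(-2, 3), random.uniform(-2, 2)]
        fit[j] = objective(10 ** nests[j][0], 10 ** nests[j][1])

best = nests[fit.index(min(fit))]
gamma_opt, sigma_opt = 10 ** best[0], 10 ** best[1]
```

Searching in log10 space is a common choice for these parameters because both typically range over several orders of magnitude.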

Meennapa Rukhiran & Paniti Netinant*
College of Digital Innovation and Information Technology (DIIT), Rangsit University, Bangkok 12000, Thailand
End users are involved in improving software development processes. Nowadays, User Interface (UI) and User Experience (UX) design are of particular concern for end-user interaction in many software designs, yet most methodologies leave inconsistencies between design and implementation. Complex software is more difficult to change, and a personal finance application is among the more complex kinds of software to design, develop, and adapt. This paper proposes the development of a mobile personal finance application based on informative multidimensional layering. We separated the functional data cutting across the relationships of three categories and datasets, represented the operational semantics of the dimensions, and combined the layers of three-dimensional information with the aspect elements through components. The composition of end-user features through visual interfaces is addressed: a Three-Layer User Interface Composition Model is illustrated to transfer and compose layers, functional data, aspect elements, and components into Graphical User Interfaces. An integrated view of the software system thus keeps the design and implementation consistent and makes the framework more straightforward to support. Few works have presented a practical model of mobile informative multidimensional layering. This research applied aspect orientation and informative multidimensional layering to represent better feature models for mobile personal finance application development. We deliver a practical framework for the mobile personal finance application covering all four phases of analysis, design, implementation, and evaluation. In addition, to bridge the gap, this research presents more clearly the operations of the three-dimensional models, functional data, and aspect elements that cut across the informative multidimensional layering.
Keywords: Functional data, multidimensional data, three-dimensional layering, personal finance, user interface.

Shamini Raja Kumaran, Mohd Shahizan Othman & Lizawati Mi Yusuf
Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia
Missing values are a major constraint in microarray technologies for improving the identification of disease-causing genes. Estimating missing values is an unavoidable scenario faced by experts, and imputation methods are an effective way to fill in proper values so that subsequent processes in microarray analysis can proceed. Missing value imputation approaches may also increase classification accuracy, and the achieved accuracy rates demonstrate the ability of these approaches to recover the missing values in gene expression data. In this article, a novel approach, optimized hybrid fuzzy c-Means and majority vote (opt-FCMMV), is proposed to identify the missing values in the data. Using majority vote (MV) and optimization through particle swarm optimization (PSO), this approach predicts the missing values to form more informative and solid data. In order to verify the effectiveness of opt-FCMMV, several experiments were carried out on two publicly available microarray datasets (ovary and lung cancer samples) under three missing value mechanisms with five different missing percentages, using a Support Vector Machine (SVM) classifier. The experimental results showed that the proposed approach functioned efficiently, achieving the highest accuracy rates compared to no imputation, FCM, and FCMMV. For example, the accuracy rates for the Ovary data with 5% missing values were 64.0% with no imputation, 81.8% for FCM, 90.0% for FCMMV, and 93.7% for opt-FCMMV. In future work, other metaheuristic algorithms can be used to optimize the imputation of missing values and further improve performance.
Keywords: fuzzy c-Means, majority vote, missing values, microarray data, data optimization.
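The fuzzy c-Means side of such an imputation can be sketched as follows: provisionally fill missing entries, run a few FCM iterations, then replace the missing entries with membership-weighted centroid values. This is a simplified sketch only; the majority-vote step and the PSO optimization that define opt-FCMMV are omitted, and the data, cluster count, and fuzzifier are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix with ~10% of entries missing (NaN).
X = rng.random((30, 6))
mask = rng.random(X.shape) < 0.1
X[mask] = np.nan

# Step 1: provisional fill with column means.
col_means = np.nanmean(X, axis=0)
Xf = np.where(np.isnan(X), col_means, X)

# Step 2: a few fuzzy c-means iterations (fuzzifier m = 2).
c, m = 3, 2.0
U = rng.random((X.shape[0], c))
U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per row
for _ in range(30):
    um = U ** m
    centroids = um.T @ Xf / um.sum(axis=0)[:, None]
    d = np.linalg.norm(Xf[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
    inv = 1.0 / d ** 2                     # membership update for m = 2
    U = inv / inv.sum(axis=1, keepdims=True)

# Step 3: impute missing entries from membership-weighted centroids.
recon = U @ centroids
X_imputed = np.where(np.isnan(X), recon, X)
```

Because each imputed value is a convex combination of cluster centroids, it stays inside the range of the observed data.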

1Yoanes Bandung & 1Joshua Tanuraharja
1School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Indonesia
QoS provisioning for real-time multimedia applications is largely determined by the network's available bandwidth, yet there is still no standard method for estimating bandwidth on wireless networks. Therefore, in this study a mathematical model called Modified Passive Available Bandwidth Estimation (MPABE) was developed to passively estimate the available bandwidth of a Distributed Coordination Function (DCF) wireless network under the IEEE 802.11 protocol. The model is a modification of three existing mathematical models, namely Available Bandwidth Estimation (ABE), Cognitive Passive Estimation of Available Bandwidth V2 (cPEAB-V2), and Passive Available Bandwidth Estimation (PABE). The proposed model emphasizes the factors that must be accounted for when estimating available bandwidth and helps in devising estimation strategies for IEEE 802.11 networks. It consists of idle-period synchronization between sender and receiver, the probability of overhead occurring in the Medium Access Control (MAC) layer, and the probability of successful packet transmission. Successful packet transmission is influenced by three variables: the packet collision probability caused by neighboring nodes, the packet collision probability caused by traffic from hidden nodes, and the packet error probability. The proposed model was tested by comparing it with the other relevant mathematical models, and the performance of the four models was compared against the actual bandwidth. Through a series of experiments, the proposed model was found to be approximately 26% more accurate than ABE, 36% more accurate than cPEAB-V2, and 32% more accurate than PABE.
Keywords: Available bandwidth estimation, distributed coordination function, IEEE 802.11, hidden nodes.
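Models in this family typically express available bandwidth as the synchronized idle fraction of the medium scaled by capacity and by success and overhead corrections. The following is only a schematic form assembled from the factors named in the abstract, with assumed symbols; it is not the exact MPABE equation:

```latex
AB \;\approx\; \frac{T_{\mathrm{idle}}}{T}\cdot C \cdot \bigl(1 - p_{\mathrm{overhead}}\bigr)\cdot p_{\mathrm{succ}},
\qquad
p_{\mathrm{succ}} = \bigl(1 - p_{\mathrm{col}}\bigr)\bigl(1 - p_{\mathrm{hid}}\bigr)\bigl(1 - p_{\mathrm{err}}\bigr),
```

where $T_{\mathrm{idle}}/T$ is the synchronized idle fraction between sender and receiver, $C$ is the link capacity, $p_{\mathrm{overhead}}$ is the MAC-layer overhead probability, and $p_{\mathrm{col}}$, $p_{\mathrm{hid}}$, and $p_{\mathrm{err}}$ are the neighbor-collision, hidden-node-collision, and packet-error probabilities, respectively.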

1Abdullah Mohammed Rashid, 2Ali A. Yassin, 1Ahmed A. Alkadhmawee & 2Abdulla J. Yassin
1Education College for Human Science, University of Basrah.
2Computer Dept., Education College for Pure Sciences, University of Basrah.
Nowadays, many images and documents, such as certificates, personal photographs, and passports, are stored in datasets and on cloud servers. These images and documents are utilized in several applications serving the residents of smart cities, and image similarity is considered one such application. The major challenges in the field of image management are searching for and retrieving images, because searching based on image content requires a long time. In this paper, the researchers present a secure scheme for retrieving images in a smart city to identify wanted criminals by using the Gray Level Co-occurrence Matrix (GLCM). The proposed scheme extracts only five features of the query image: Contrast, Homogeneity, Entropy, Energy, and Dissimilarity. The work consists of six phases: registration, authentication, face detection, feature extraction, image similarity, and image retrieval. The current study runs on a database of 810 images borrowed from face94 to measure retrieval performance. The experimental results show an average precision (AP) of 97.6 and an average recall (AR) of 6.3. Compared with the results of two previous studies, the current study yields encouraging results.
Keywords: Image Retrieval, Image Similarity, Feature Extraction, Smart City Security.
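The five GLCM features named in the abstract can be computed from a normalized co-occurrence matrix. The sketch below uses a toy 4-level image and a single horizontal offset; the quantization level and offset are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Tiny 4-level grayscale patch (values 0..3), standing in for a face image.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

# Co-occurrence counts for the horizontal offset (0, 1).
glcm = np.zeros((levels, levels))
for r in range(img.shape[0]):
    for c in range(img.shape[1] - 1):
        glcm[img[r, c], img[r, c + 1]] += 1
P = glcm / glcm.sum()                     # normalize to joint probabilities

i, j = np.indices(P.shape)
contrast      = np.sum(P * (i - j) ** 2)
dissimilarity = np.sum(P * np.abs(i - j))
homogeneity   = np.sum(P / (1.0 + np.abs(i - j)))
energy        = np.sum(P ** 2)
entropy       = -np.sum(P[P > 0] * np.log2(P[P > 0]))

features = [contrast, homogeneity, entropy, energy, dissimilarity]
```

In a retrieval scheme, such a five-element vector would be compared against stored feature vectors with a distance measure to rank candidate images.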

Nur Nabila Mohamed, Yusnani Mohd Yussoff, Mohammed Ahmed Mohammed Saleh & Habibah Hashim
Faculty of Electrical Engineering, Universiti Teknologi MARA, Selangor, Malaysia
Cryptography is described as the study of encrypting, or secretly writing, data using logical and mathematical principles to protect information. The technique has grown in importance in computing technologies for banking services, medical systems, transportation, and other Internet of Things (IoT)-based applications, which have been subject to increasing security concerns. Each cryptographic scheme is built with its own strengths, but implementing a single scheme in a system has disadvantages. For instance, symmetric encryption provides a cost-effective technique for securing data without compromising security; however, sharing the secret key is a vital problem. Asymmetric schemes solve the secret key distribution issue, but as a standalone technique they are slow and consume more computing resources than symmetric encryption. A hash function, in contrast, generates a unique, fixed-length signature for a message to provide data integrity, but it is only a one-way function, which is infeasible to invert. As an alternative that addresses the weaknesses of each individual scheme, the integration of several cryptographic schemes, also called the hybrid technique, has been proposed, offering efficient data security while solving the key distribution issue. Herein, a review of articles related to the hybrid cryptographic approach from 2013 until 2018 is presented. Current IoT domains that implement hybrid approaches have been identified, and the review is organized according to domain category. The significant finding of this literature review is the exploration of the various IoT domains in which hybrid cryptographic techniques have been implemented to improve performance. From the findings, it can be concluded that the hybrid cryptographic approach has been implemented in many IoT cloud computing services. Additionally, AES and ECC are found to be the most popular methods used in the hybrid approach due to their computing speed and security resistance compared to other schemes.
Keywords: hybrid cryptographic approach; internet of things; symmetric encryption; asymmetric encryption; cryptographic hash function.
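The structure of a hybrid scheme — an asymmetric key exchange establishing a shared secret, a symmetric cipher for bulk data, and a keyed hash for integrity — can be sketched with standard-library primitives only. Everything here is a deliberately insecure toy: classic Diffie-Hellman over a small Mersenne prime stands in for ECC/ECDH, and a SHA-256 counter-mode keystream stands in for AES; real systems would use X25519 or ECDH with AES-GCM.

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman group (Mersenne prime 2**127 - 1); stand-in for ECDH.
P = 2 ** 127 - 1
G = 3

a = secrets.randbelow(P - 2) + 2          # Alice's ephemeral secret
b = secrets.randbelow(P - 2) + 2          # Bob's ephemeral secret
shared = pow(pow(G, a, P), b, P)          # both sides derive the same value
key = hashlib.sha256(str(shared).encode()).digest()

def keystream(key, nonce, n):
    # SHA-256 in counter mode as a toy stream cipher (AES stand-in).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key, msg):
    nonce = secrets.token_bytes(16)
    ct = bytes(m ^ k for m, k in zip(msg, keystream(key, nonce, len(msg))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity
    return nonce, ct, tag

def decrypt(key, nonce, ct, tag):
    expect = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

msg = b"smart meter reading"
nonce, ct, tag = encrypt(key, msg)
```

The division of labor mirrors the review's conclusion: the slow asymmetric step runs once per session to agree on a key, while the fast symmetric cipher handles the payload and the hash guards integrity.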

1Solomon Adelowo Adepoju, 2Ishaq Oyebisi Oyefolahan, 1Muhammed Bashir Abdullahi & 3Adamu Alhaji Mohammed
1Department of Computer Science, Federal University of Technology Minna, Nigeria
2Department of Information and Media Technology, Federal University of Technology Minna, Nigeria
3Department of Mathematics, Federal University of Technology Minna, Nigeria
Websites are very important to every organisation, and tremendous efforts are being made to design websites that look and feel good and are not only usable but of high quality. A critical task, however, is evaluating these websites to ensure users are satisfied with their quality and usability. Though a variety of methods and approaches have been proposed, research efforts are still increasing to model website quality and usability evaluation from the decision-makers' point of view, which existing methods do not handle. This has led to multi-criteria decision-making (MCDM) approaches being applied to website evaluation to handle the complexity of the decision. This paper therefore provides a review of the various MCDM methods that have been used in the usability and quality evaluation of websites. The adopted search strategy identified a total of 63 articles published in peer-reviewed journals and international conferences between 2005 and 2017. Based on the research questions formulated for the study, the papers are classified by MCDM approach, website genre, the number and list of criteria used over the years, and the localization of the websites by country. The findings show that the Analytical Hierarchy Process integrated with fuzzy logic is the most common method over the years, and that e-commerce websites are the most common website genre. Furthermore, most of the evaluated websites are from Turkey, and the average number of criteria is five.
Keywords: Multi-criteria decision making, website quality, website usability, website evaluation.
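Since the review finds the Analytical Hierarchy Process to be the most common MCDM method, here is a minimal AHP sketch: criteria weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency check. The three criteria and their judgment values are hypothetical examples, not taken from any reviewed paper.

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical website criteria,
# e.g. (content quality, navigation, visual design), on Saaty's 1-9 scale:
# criterion 1 is judged 3x as important as 2 and 5x as important as 3.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Principal eigenvector via power iteration gives the criteria weights.
w = np.ones(A.shape[0])
for _ in range(100):
    w = A @ w
    w /= w.sum()

# Consistency ratio (RI = 0.58 is Saaty's random index for n = 3).
lam = (A @ w / w).mean()                 # principal eigenvalue estimate
CI = (lam - A.shape[0]) / (A.shape[0] - 1)
CR = CI / 0.58                           # CR < 0.1 means judgments are usable
```

Website scores per criterion would then be aggregated with these weights; the fuzzy-AHP variants in the review replace the crisp 1-9 judgments with fuzzy numbers.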

1Noor Huzaimi@Karimah Mohd Noor, 2Shahrul Azman Mohd Noah & 2Mohd Juzaiddin Ab Aziz
1Faculty of Computing, Universiti Malaysia Pahang, Malaysia
2Faculty of Information Science & Technology, Universiti Kebangsaan Malaysia, Malaysia
Anaphor candidate determination is an important process in Anaphora Resolution (AR) systems. There are several types of anaphor, one of which is the pronominal anaphor, an anaphor involving a pronoun. In some cases, certain pronouns can be used without referring to any situation or entity in a text, a phenomenon known as pleonastic use. In the Malay language, it usually occurs with the pronoun nya. Pleonastic occurrences in a text cause severe problems for anaphora resolution systems. The process of determining the pleonastic nya differs from identifying the pleonastic 'it' in English: syntactic patterns cannot be used because nya is attached to the end of a word. As an alternative, semantic classes are used to identify both the pleonastic and the anaphoric nya. In this paper, automatic semantic tagging is used to determine the type of nya, which at the same time determines whether nya is an anaphor candidate. New algorithms and the MalayAR architecture are proposed. The F-measure results showed that detection of the clitic nya as a separate word achieved a perfect 100%, while the clitic nya as a pleonastic achieved 88%, the clitic nya referring to humans achieved 94%, and the clitic nya referring to non-humans achieved 63%. The results show that the proposed algorithms adequately handle the clitic nya in its pleonastic, human-referring, and non-human-referring uses.
Keywords: Anaphora resolution, natural language processing, Malay anaphora resolution, anaphor candidate determination.

1,2Mohammad Raquibul Hossain & 1Mohd Tahir Ismail
1School of Mathematical Sciences, Universiti Sains Malaysia, Malaysia.
2Department of Applied Mathematics, Noakhali Science and Technology University, Bangladesh.
Forecasting is a challenging task, as time-series data exhibit many features that cannot be captured by a single model. Thus, many researchers have proposed hybrid models in order to accommodate these features and improve forecasting results. This work proposes a hybrid method combining Empirical Mode Decomposition (EMD) and the Theta method, motivated by their forecasting potential. EMD and Theta are efficient methods in their respective tasks of decomposition and forecasting, and combining them for a better synergistic outcome deserves consideration. EMD decomposed the training data of the stock price time series of each of five FTSE 100 index (Financial Times Stock Exchange 100 Index) companies into Intrinsic Mode Functions (IMF) and a residue, and the Theta method then forecasted each decomposed subseries. Considering different forecast horizons, the effectiveness of this hybridization was evaluated through conventional error measures computed between the test data and the forecasts, which were obtained by summing the forecast results of all components extracted by the EMD process. The study found that the proposed method produced better forecast accuracy than three classic methods and the hybrid EMD-ARIMA models.
Keywords: Forecasting Stock Price, Empirical Mode Decomposition (EMD), Intrinsic Mode Functions (IMF), Theta Method, Time Series.
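The Theta step can be sketched with the classic two-line formulation (theta = 0 and theta = 2): extrapolate the linear trend, apply simple exponential smoothing to the theta = 2 line, and average the two. This is a simplified textbook version under assumed settings (fixed smoothing constant, equal-weight combination); in the hybrid it would be applied to each IMF and the residue, and the component forecasts summed.

```python
def theta_forecast(x, h, alpha=0.5):
    """Simplified classic Theta method: h-step forecast of series x."""
    n = len(x)
    t = list(range(n))
    # theta = 0 line: ordinary least-squares linear trend.
    tbar, xbar = sum(t) / n, sum(x) / n
    b = (sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x))
         / sum((ti - tbar) ** 2 for ti in t))
    a = xbar - b * tbar
    line0 = [a + b * ti for ti in t]
    # theta = 2 line doubles the local fluctuations around the trend.
    line2 = [2 * xi - l for xi, l in zip(x, line0)]
    # Simple exponential smoothing of the theta = 2 line (flat forecast).
    level = line2[0]
    for v in line2[1:]:
        level = alpha * v + (1 - alpha) * level
    # Combine: average the extrapolated trend and the SES level.
    return [0.5 * ((a + b * (n + k)) + level) for k in range(h)]

series = [10, 12, 11, 13, 14, 13, 15, 16]   # toy upward-trending subseries
fc = theta_forecast(series, h=3)
```

Applied per component, the final hybrid forecast is simply the elementwise sum of the `theta_forecast` outputs for every IMF and the residue.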

Ashwindran Naidu Sanderasagran, Azizuddin Abd Aziz & Daing Mohamad Nafiz Daing Idris
Faculty of Mechanical and Automotive Engineering Technology, Universiti Malaysia Pahang, Malaysia.
The behavior of fluid flow is a complex paradigm for cognitive interpretation and visualization. Engineers need to visualize the mechanics of a flow field's response in order to enhance their problem-solving ability, and mixed-reality technology offers an enhanced, interactive virtual learning environment for this purpose. However, AR platforms for interactive learning of fluid flow are limited. Hence, an interactive educational app is proposed for students and engineers to interact with and understand the complex flow behavior around elementary geometric bodies in external flow. This paper presents the technical development of a real-time flow-response visualization augmented reality app for computational fluid dynamics applications. It was developed with the assistance of several tools, namely Unity, Vuforia, and the Android IDE. The particle system modules available in the Unity engine were used to create a 2D flow stream domain. Flow visualization and interaction are limited to 2D, and the numerical response of the fluid continuum was not analyzed. The physical flow response patterns of three simple geometric bodies were validated against ANSYS simulation results based on visual empirical observation. The particle size and the number of particles emitted were adjusted to emulate the physical representation of fluid flow, and the color contour was set to change according to fluid velocity. Visual validation indicates only trivial dissimilarities between the FLUENT-generated results and the flow response exhibited by the proposed augmented reality app.
Keywords: Augmented reality, computational fluid dynamics, image target, vuforia, unity engine, particle system.

1Zahra Bokaee Nezhad & 2Mohammad Ali Deihimi
1Department of Computer Engineering, Zand University, Iran
2Department of Electronics Engineering, Bahonar University, Iran
Sarcasm is a form of communication in which the individual states the opposite of what is implied, so detecting a sarcastic tone is somewhat complicated due to its ambiguous nature. At the same time, identifying sarcasm is vital to various Natural Language Processing (NLP) tasks such as sentiment analysis and text summarization. However, research on sarcasm detection in Persian is very limited. Therefore, we investigate a sarcasm detection technique for Persian tweets that combines deep learning-based and machine learning-based approaches. We propose four sets of features that cover different types of sarcasm: a deep polarity feature, a sentiment feature, a part-of-speech feature, and a punctuation feature. We use these features to classify tweets as sarcastic or non-sarcastic. The deep polarity feature is obtained by conducting sentiment analysis using a deep neural network architecture. In addition, to extract the sentiment feature, we created a Persian sentiment dictionary consisting of four sentiment categories. We also provide a new Persian proverb dictionary, used in the preparation step, to enhance the accuracy of the proposed model. The performance of the model is analyzed using several standard machine learning algorithms. The experimental results show that our method outperforms the baseline and reaches an accuracy of 80.82%, and we also study the importance of each set of proposed features and evaluate its added value to the classification.
Keywords: Sarcasm Detection, Natural Language Processing, Machine Learning, Sentiment Analysis, Classification.
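Of the four feature groups, the punctuation feature is the most mechanical to illustrate. The sketch below counts expressive punctuation cues often associated with sarcasm; the specific cue list is an assumption, the example tweet is in English for readability (the paper's data is Persian), and the deep polarity, sentiment, and POS features would be concatenated with this vector in the full model.

```python
import re

def punctuation_features(tweet):
    # One of the four feature groups: counts of expressive punctuation
    # and casing cues that often co-occur with sarcasm.
    return [
        tweet.count("!"),                            # exclamation marks
        tweet.count("?"),                            # question marks
        len(re.findall(r"\.{2,}", tweet)),           # ellipses (.. or longer)
        len(re.findall(r'["\u00AB\u00BB]', tweet)),  # quotation marks / guillemets
        sum(ch.isupper() for ch in tweet),           # uppercase letters
    ]

vec = punctuation_features("Oh GREAT... another Monday!!!")  # [3, 0, 1, 0, 7]
```

Such shallow counts are cheap to compute and, per the abstract's feature-importance analysis, contribute measurable value alongside the learned polarity features.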

Universiti Utara Malaysia Press
 Universiti Utara Malaysia, 06010 UUM Sintok
Kedah Darul Aman, MALAYSIA
Phone: +604-928 4816, Fax : +604-928 4792

Creative Commons License

All articles published in Journal of Information and Communication Technology (JICT) are licensed under a Creative Commons Attribution 4.0 International License.

All Rights Reserved. Copyright © 2010, Universiti Utara Malaysia Press