The International Arab Journal of Information Technology

January 2005, Volume 2, Number 1


Enhancing Cognitive Aspects of Software Visualization Using DocLike Modularized Graph


Shahida Sulaiman1, Norbik Bashah Idris2, and Shamsul Sahibuddin3

1Faculty of Computer Science, Universiti Sains Malaysia, Malaysia

2Center for Advanced Software Engineering, Universiti Teknologi Malaysia, Malaysia

3Faculty of Computer Science and Information System, Universiti Teknologi Malaysia, Malaysia


Abstract: Understanding an existing software system in order to trace the changes involved in a maintenance task can be time consuming, especially if its design document is absent or outdated. In this case, visualizing the software artifacts graphically may improve software maintainers' cognition of the subject system. A number of tools have emerged for this purpose; they generally consist of a reverse engineering environment and a viewer that visualizes software artifacts, for example as graphs. These tools also support structural re-documentation of existing software systems, but they do not explicitly employ document-like software visualization in their methods. This paper proposes the DocLike Modularized Graph method, which represents the software artifacts of a reverse-engineered subject system graphically, module by module, in a document-like re-documentation environment. The method is realized in a prototype tool named DocLike viewer that generates graphical views of a C software system parsed by a selected C language parser. Two experiments were conducted to assess how much the proposed method improves maintainers' cognition of an undocumented subject system, in terms of productivity and quality. Both results indicate that the method has the potential to improve the cognitive aspects of software visualization and to support software maintainers in finding solutions to assigned maintenance tasks.


Keywords: Software maintenance, software visualization, program comprehension.


Received July 21, 2003; accepted March 8, 2004



LiSER: A Software Experience Management Tool to Support Organisational Learning in Software Development Organisations


Abdulmajid Mohamed, Sai Peck Lee, and Siti Salwah Salim

Faculty of Computer Science and Information Technology, University of Malaya, Malaysia


Abstract: The efficient management of experience knowledge is vital in today’s knowledge-based economy. This paper is concerned with developing a software experience management tool as an organisational memory subsystem. The tool aims to support Knowledge Management (KM) and Organisational Learning (OL) activities in a typical software organisation. It is specifically targeted at capturing the pearls of tacit knowledge in the form of Knowledge Assets (K-Assets), which surface only as the outcome of collaborative analysis and refinement of the captured knowledge. The prototype tool is based on the framework for collaborative organisational learning that we developed in previous research.

Keywords: Knowledge management, organisational memory systems, tacit knowledge, organisational learning, ontologies.

Received July 27, 2003; accepted February 9, 2004



On the Routing of the OTIS-Cube Network in Presence of Faults


Ahmad Awwad1 and Jehad Al-Sadi2

1Faculty of Computing and Information Technology, Arab Open University, Kuwait

2Faculty of Computing and Information Technology, Arab Open University, Jordan


Abstract: This paper proposes a new fault-tolerant routing algorithm for the well-known class of networks, the OTIS-cube. In the proposed algorithm, each node A starts by computing the first-level unsafety set, S^1_A, composed of the set of its unreachable direct neighbors. It then performs m - 1 exchanges with its neighbors to determine the k-level unsafety sets S^k_A for all 1 ≤ k ≤ m, where m is an adjustable parameter between 1 and 2n + 1. The k-level unsafety set at node A represents the set of all nodes at Hamming distance k from A that are either faulty or unreachable from A due to faulty nodes or links. Equipped with these unsafety sets, we show how each node calculates numeric unsafety vectors and uses them to achieve efficient fault-tolerant routing.
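The unsafety-set construction described in the abstract can be illustrated with a minimal sketch. The code below is our illustration, not the authors' algorithm: it models only a plain n-cube (not the full OTIS-cube, whose optical inter-group links the paper also handles), and all function names are ours. Level 1 holds the faulty direct neighbours; each further level k is built by an exchange round that collects, from every healthy neighbour B, the members of B's level k-1 set lying at Hamming distance k from A.

```python
def hamming(a, b):
    # Hamming distance between two node addresses of the n-cube.
    return bin(a ^ b).count("1")

def unsafety_sets(n, faulty, m):
    """For every healthy node A of an n-cube, build the k-level unsafety
    sets S^k_A (k = 1..m) by m-1 simulated neighbour-exchange rounds.
    Returns {A: [S1, ..., Sm]}."""
    nodes = [v for v in range(2 ** n) if v not in faulty]
    # Level 1: faulty (hence unreachable) direct neighbours.
    S = {A: [{A ^ (1 << i) for i in range(n)} & faulty] for A in nodes}
    # Levels 2..m: gather neighbours' previous-level sets.
    for k in range(2, m + 1):
        for A in nodes:
            level = set()
            for i in range(n):
                B = A ^ (1 << i)
                if B not in faulty:
                    level |= {x for x in S[B][k - 2] if hamming(x, A) == k}
            S[A].append(level)
    return S

def unsafety_vector(S, A):
    # Numeric unsafety vector: the size of each k-level set at node A.
    return [len(level) for level in S[A]]
```

For example, in a 3-cube with node 0 faulty, node 1 (a direct neighbour of 0) gets the vector [1, 0, 0], while node 7 (at distance 3 from 0) gets [0, 0, 1].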


Keywords: Interconnection networks, OTIS-cube, fault-tolerant routing algorithm, safety vectors.


Received July 30, 2003; accepted March 20, 2004



Pure DDP-Based Cipher: Architecture Analysis, Hardware Implementation Cost and Performance up to 6.5 Gbps


Nikolay Moldovyan1, Nicolas Sklavos2, and Odysseas Koufopavlou2

1Specialized Center of Program Systems (SPECTR), Russia

2Electrical and Computer Engineering Department, University of Patras, Greece


Abstract: Using Data-Dependent (DD) Permutations (DDP) as the main cryptographic primitive, a new 64-bit block cipher is presented: the ten-round DDP-64. Since the sum of all outputs of the conventional DDP is a linear Boolean function, a non-linear DDP-based operation F is additionally used in DDP-64. DDP-64 is a pure DDP-based cipher, i.e., it uses only permutations and the XOR operation. The designed cipher uses a very simple key schedule, which yields high performance, especially in the case of frequent key refreshing. A novel feature of DDP-64 is the use of a switchable operation that prevents weak keys. The high level of security offered does not sacrifice the implementation performance of DDP-64. Design and hardware implementation architectures of this cipher are presented. The synthesis results for both Field Programmable Gate Array (FPGA) and Application Specific Integrated Circuit (ASIC) implementations prove that DDP-64 is a very flexible and powerful new cipher, especially for high-speed WLANs and WPANs. The achieved hardware performance of up to 6.5 Gbps and the implementation area cost of DDP-64 are compared with those of other ciphers used in the security layers of wireless protocols (Bluetooth, WAP, OMA, UMTS and IEEE 802.11). From these comparisons, it is shown that DDP-64 is a flexible new cipher with better performance in most cases, suitable for present and future wireless communication networks.


Keywords: Data-dependent permutations, hardware implementation, fast encryption, block cipher, security.

Received August 7, 2003; accepted December 23, 2003



Frequency Domain Watermarking: An Overview


Khaled Mahmoud, Sekharjit Datta, and James Flint

Department of Electrical and Electronic Engineering, Loughborough University, UK


Abstract: With the rapid growth of computer networks and information technology, a large number of copyrighted works now exist digitally as computer files, and electronic publishing is becoming more popular. These advances in computer technology increase the problems associated with copyright enforcement, and thus future developments of networked multimedia systems are conditioned on the development of efficient methods to protect ownership rights against unauthorized copying and redistribution. Digital watermarking has recently emerged as a candidate to solve this difficult problem. In the first part of this paper we present an overview of digital watermarking: the general framework, its main applications, the most important properties, the main aspects used to classify watermarking, and the attacks that a watermarking system may face. We then introduce the human visual system and its interaction with watermarking, as well as some open problems in digital watermarking. In the second part we give an overview of watermarking in the frequency domain. The general properties of the frequency domain as well as the specific properties of each sub-domain are introduced. The sub-domains considered are the discrete cosine, discrete wavelet and discrete Fourier domains. We also present some different watermarking techniques in each category.

Keywords: Watermarking, steganography, information hiding, frequency domain, human visual system.

Received August 9, 2003; accepted March 8, 2004



A Survey of Distributed Query Optimization


Alaa Aljanaby1, Emad Abuelrub1, and Mohammed Odeh2

1Computer Science Department, Zarqa Private University, Jordan

2Faculty of Computing, University of the West of England, UK

Abstract: Distributed query optimization is one of the hardest problems in the database area. The great commercial success of database systems is partly due to the development of sophisticated query optimization technology, where users pose queries in a declarative way using SQL or OQL and the optimizer of the database system finds a good way (i.e., a plan) to execute these queries. The optimizer, for example, determines which indices should be used to execute a query and in which order the operations of a query (e.g., joins, selections, and projections) should be executed. To this end, the optimizer enumerates alternative plans, estimates the cost of every plan using a cost model, and chooses the plan with the lowest cost. There has been much research in this field. In this paper, we study the problem of distributed query optimization, focusing on the basic components of the distributed query optimizer, i.e., the search space, the search strategy, and the cost model. A survey of the available work in this field is given. Finally, some future work is highlighted, based on recent work that uses mobile agent technologies.
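The enumerate-estimate-choose loop the abstract describes can be sketched in a few lines. This toy optimizer is our illustration, not taken from the survey: it exhaustively searches the left-deep join orders of three relations with made-up cardinalities and join selectivities, costing each plan by the total size of its intermediate results.

```python
from itertools import permutations

# Toy statistics (assumed for illustration): relation cardinalities
# and pairwise join selectivities.
CARD = {"R": 1000, "S": 100, "T": 10}
SEL = {frozenset(p): s for p, s in [(("R", "S"), 0.01),
                                    (("S", "T"), 0.1),
                                    (("R", "T"), 0.05)]}

def plan_cost(order):
    """Cost model: sum of intermediate result sizes of a left-deep plan."""
    size, joined, cost = CARD[order[0]], {order[0]}, 0.0
    for rel in order[1:]:
        # Apply every selectivity linking the new relation to those joined.
        sel = 1.0
        for j in joined:
            sel *= SEL.get(frozenset((j, rel)), 1.0)
        size = size * CARD[rel] * sel
        joined.add(rel)
        cost += size
    return cost

def best_plan(relations):
    # Search strategy: exhaustive enumeration of the left-deep search space.
    return min(permutations(relations), key=plan_cost)
```

With these numbers the optimizer joins the two small relations first and the large relation R last, cutting the total intermediate-result cost from 1050 tuples (for the order R, S, T) to 150.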


Keywords: Distributed query optimization, deterministic strategies, randomized strategies.


Received October 1, 2003; accepted March 3, 2004



Using Probabilistic Unsupervised Neural Method for Lithofacies Identification


Salim Chikhi and Mohamed Batouche

Computer Science Department, Mentouri University of Constantine, Algeria

Abstract: This paper presents a probabilistic unsupervised neural method for constructing the lithofacies of the wells HM2 and HM3, situated in the south of Algeria (Sahara). Our objective is to facilitate the experts' work in the geological domain and to allow them to obtain quickly the structure and nature of the terrain around the borehole. For this, we propose the use of Kohonen's Self-Organizing Map (SOM). We introduce a set of labeled log data at some points of the hole. Once the obtained map is the best-deployed one (i.e., the neural network is well adapted to the well data), a probabilistic formalism is introduced to enhance the classification process. Our system provides a lithofacies of the hole concerned in a form easy to read by a geology expert, who identifies the potential for oil production at a given source and so forms the basis for estimating the financial returns and economic benefits. The obtained results show that the approach is robust and effective.
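The unsupervised core of the method, a Kohonen map trained on log samples and then used to label new samples, can be sketched minimally. This is our generic illustration (a 1-D map, plain Euclidean matching), not the authors' system, and it omits the probabilistic post-processing the abstract adds on top.

```python
import math
import random

def train_som(data, grid=4, iters=800, lr0=0.5, seed=3):
    """Minimal 1-D Kohonen map: 'grid' units, each holding a weight vector;
    the best-matching unit and its neighbours move toward each sample."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for t in range(iters):
        x = rng.choice(data)
        # Best-matching unit = smallest squared Euclidean distance.
        bmu = min(range(grid),
                  key=lambda u: sum((units[u][d] - x[d]) ** 2 for d in range(dim)))
        lr = lr0 * (1 - t / iters)            # learning rate decays to 0
        radius = max(1.0, grid / 2 * (1 - t / iters))  # neighbourhood shrinks
        for u in range(grid):
            h = math.exp(-((u - bmu) ** 2) / (2 * radius ** 2))
            for d in range(dim):
                units[u][d] += lr * h * (x[d] - units[u][d])
    return units

def classify(units, x):
    # Label a log sample with the index of its best-matching unit,
    # which plays the role of a facies class.
    return min(range(len(units)),
               key=lambda u: sum((units[u][d] - x[d]) ** 2 for d in range(len(x))))
```

After training on samples from two distinct log signatures, samples resembling each signature map to different units, i.e., different candidate facies.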

Keywords: Lithofacies, deferred well logging, self-organizing map, probabilistic formalism, classification, underground cores.

Received October 4, 2003; accepted January 3, 2004



Object Modeling of Filter-Oriented Systems of Attention: Possibilities of Integration


Igor Chimir1, Waheeb Abu-Dawwas2, and Raed Alqawasmi3

1Cherkassy Academy of Management, Ukraine

2Computer Information System Department, Al-Zaytoonah University, Jordan

3Odessa State Environmental University, Ukraine

Abstract: This paper presents theoretical results of research on the development of object-oriented models of filter-oriented systems of attention. The diagrammatic language UML is used as a means of modeling attentional systems. Two filter-oriented hypotheses of focused attention, offered by Broadbent and Treisman, were chosen as prototypes. The paper includes a UML model of the structure of information in the sensory system, a classification of existing models of attention, and two UML models of the phenomenon of attention based on Broadbent's and Treisman's hypotheses, respectively. The study revealed that, from the point of view of object-oriented modeling, the model based on Broadbent's hypothesis can be considered a base class, whereas the model based on Treisman's hypothesis is its enhancement. Both UML models were used to explain the results of some key experiments on dichotic listening tasks.

Keywords: Cognitive model, models of attention, object-oriented modeling, unified modeling language.

Received October 5, 2003; accepted March 3, 2004



Fuzzy Inference Modeling Methodology for the Simulation of Population Growth


Hassan Diab and Jean Saade

Department of Electrical and Computer Engineering, American University of Beirut, Lebanon

Abstract: This paper presents the use of fuzzy inference to provide a viable modeling and simulation methodology for the estimation of population growth in any country or region. The study is motivated by the complexity and time-consuming nature of classical growth modeling and prediction methods. The related design issues are presented and the fuzzy inference model for population growth is derived. The human, social and economic factors which affect growth, and which underlie the parameters used in the classical population projection methods, are fuzzified. They are then used as inputs to a fuzzy population growth model based on fuzzy inference, so that the population growth rate can be evaluated. The fuzzy population model is simulated using an existing CAD tool for fuzzy inference, which has been developed and described elsewhere by the authors. The results obtained using different existing defuzzification strategies, as well as a recently introduced one, are compared with the actual population growth rates in several countries.
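The fuzzify-infer-defuzzify pipeline the abstract outlines can be sketched with a single input. The code below is our simplified illustration, not the authors' model or CAD tool: one fuzzified factor (a notional fertility index), three rules, singleton output sets, and weighted-average defuzzification; all set names and ranges are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets for one illustrative input (fertility index on [0, 1]) and
# singleton output values for the growth rate in % per year.
FERTILITY = {"low": (-0.5, 0.0, 0.5),
             "medium": (0.0, 0.5, 1.0),
             "high": (0.5, 1.0, 1.5)}
GROWTH = {"low": 0.5, "medium": 1.5, "high": 3.0}
RULES = [("low", "low"), ("medium", "medium"), ("high", "high")]

def growth_rate(fertility):
    """Fire each rule at its membership strength, then defuzzify by the
    weighted average of the output singletons (a centroid for singletons)."""
    num = den = 0.0
    for f_set, g_set in RULES:
        w = tri(fertility, *FERTILITY[f_set])  # rule firing strength
        num += w * GROWTH[g_set]
        den += w
    return num / den if den else 0.0
```

A fertility index of 0.25 fires the "low" and "medium" rules at equal strength, so the inferred growth rate interpolates to 1.0% per year between the two output singletons.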

Keywords: Fuzzy inference, modeling, simulation, population growth, defuzzification.

Received October 31, 2003; accepted February 24, 2004



AUTOMARK++: A CASE Tool to Automatically Mark Student Java Programs


Jubair Al-Ja'afer and Khair Eddin Sabri

King Abdullah II School for Information Technology, The University of Jordan, Jordan

Abstract: The quality assessment of a computer program is a critical process for ensuring its effectiveness. In this paper, an easy-to-apply tool, AUTOMARK++, is introduced to automatically evaluate Java programs. The marking of a program under evaluation is based on its style. AUTOMARK++ is based on the AUTOMARK tool of Redish and Smyth [12]. Two modifications were made to AUTOMARK: first, new factors have been introduced to give the new tool flexibility in evaluating object-oriented languages such as Java; second, the new tool automatically generates a model template for program evaluation instead of requiring a specific model to be written for each program under evaluation. AUTOMARK++ has been tested on simple and complex programs, and the obtained results showed that the tool is considerably useful.

Keywords: Software engineering, style metric, software quality, Java programming language. 

Received November 2, 2003; accepted January 22, 2004