Vol. 12, No. 3&4, July 1, 2013
Research Articles:
Supporting Accessibility in Web Engineering Methods: A Methodological
Approach (pp181-202)
Lourdes Moreno, Francisco Valverde, Paloma Martínez and Oscar
Pastor
Web accessibility not only
guarantees universal user access to the Web, but also brings tangible
benefits to Web development. A promising way to achieve Web accessibility
is to incorporate accessibility requirements into current Web engineering
methods. This article presents the Accessibility for Web Applications (AWA)
approach, whose aim is to integrate accessibility into Web engineering
methods. The paper also discusses the application of AWA to the
Object-Oriented Web Solutions (OOWS) engineering method to produce
accessible Web applications, with a focus on navigational requirements.
To demonstrate the practical applicability and usefulness of the approach,
a proof of concept is described whose results indicate that the navigation
accessibility requirements are satisfied. By applying AWA within this
model-driven development (MDD) method, previously defined OOWS models are
extended with accessibility criteria, providing resources for the required
changes in the process.
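As a rough illustration only (not part of the published article and not the AWA/OOWS metamodel), the following Python sketch shows the kind of navigation-level accessibility check that could be run against a navigational model; the class names and criteria are illustrative assumptions.

```python
# Illustrative sketch only: a toy navigational model checked against a few
# WCAG-style navigation criteria (not the actual AWA/OOWS metamodel).
from dataclasses import dataclass, field


@dataclass
class NavigationNode:
    name: str
    label: str = ""            # accessible label announced to assistive technology
    links: list = field(default_factory=list)


@dataclass
class NavigationalModel:
    nodes: list
    has_skip_link: bool = False    # "skip to main content" mechanism
    has_site_map: bool = False     # second navigation mechanism ("multiple ways")


def check_navigation_accessibility(model: NavigationalModel) -> list:
    """Return a list of unmet navigation accessibility criteria."""
    violations = []
    if not model.has_skip_link:
        violations.append("Missing skip-to-content link (bypass blocks)")
    if not model.has_site_map:
        violations.append("Only one navigation mechanism (multiple ways)")
    for node in model.nodes:
        if not node.label:
            violations.append(f"Navigation node '{node.name}' has no accessible label")
    return violations


if __name__ == "__main__":
    model = NavigationalModel(
        nodes=[NavigationNode("home", label="Home"), NavigationNode("search")],
        has_skip_link=True,
    )
    for v in check_navigation_accessibility(model):
        print("UNMET:", v)
```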
Topical Crawling on the Web through Local Site-Searches
(pp203-214)
Yaling Liu and Arvin Agah
In this paper, we
investigate the feasibility of discovering topical resources by combining
Web searches with local site-searches. Existing techniques for topical
resource discovery consist of crawling the Web and searching the Web. The
former typically analyses linkage among Web pages to estimate the relevance
of an unseen document to a topic. The latter exploits the indices of
generic search engines to discover documents relevant to a topic. Although
the local site-search has long been a simple and convenient feature that
lets human users quickly locate desired information within a site hosting a
tremendous number of documents, it has been ignored by techniques for
automatic topical resource discovery. A typical local site-search returns a
list of titles, hyperlinks, and snippets of relevant documents, which can
be used to estimate the relevance of the documents to the topic before
actually fetching them. We propose an operational model that makes use of
this simple feature and address how this model can be realized. Experiments
show that this simple but efficient approach can provide much more precise
relevance estimates than a sophisticated intelligent topical crawler.
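To make the idea concrete (this is an illustrative sketch, not the authors' operational model), the following Python snippet scores local site-search results from their titles and snippets before any document is fetched; the scoring rule and threshold are assumptions.

```python
# Illustrative sketch: estimate topical relevance from local site-search result
# snippets before fetching full documents (not the authors' exact model).
import math
import re
from collections import Counter


def term_vector(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_site_search_results(topic: str, results: list, threshold: float = 0.2) -> list:
    """results: dicts with 'url', 'title', 'snippet' as returned by a site's
    local search. Only URLs scoring above the threshold would be fetched."""
    topic_vec = term_vector(topic)
    scored = []
    for r in results:
        score = cosine(topic_vec, term_vector(r["title"] + " " + r["snippet"]))
        if score >= threshold:
            scored.append((score, r["url"]))
    return sorted(scored, reverse=True)


if __name__ == "__main__":
    hits = [
        {"url": "/a", "title": "Topical crawling of the web",
         "snippet": "Focused crawlers estimate relevance of unseen pages."},
        {"url": "/b", "title": "Cafeteria menu", "snippet": "Weekly lunch specials."},
    ]
    print(rank_site_search_results("topical web crawling relevance", hits))
```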
A Conceptual Graph Based Approach for Mappings among Multiple Fuzzy
Ontologies
(pp215-231)
Lingyu Zhang, Yi Yan, and Z. M. Ma
Fuzzy ontology mapping is
an important tool for solving the problem of interoperation among
heterogeneous ontologies containing fuzzy information. Some research has
been done to extend existing mapping methods to handle fuzzy ontologies,
but these methods do not perform well when creating mappings among multiple
fuzzy ontologies in a specific domain. To this end, this paper proposes a
new method for fuzzy ontology mapping called FOM-CG (Fuzzy Ontology Mapping
based on Conceptual Graph). To reduce unnecessary comparisons among
multiple fuzzy ontologies in a domain, FOM-CG first creates or identifies a
Reference Ontology that contains the most common and shared information;
the other fuzzy ontologies in the domain are Source Ontologies. These fuzzy
ontologies are then transformed into conceptual graph sets (the R-set and
S-sets), algorithms are presented to create mappings among the conceptual
graph sets, and the resulting mappings are finally transformed back into
mappings among the fuzzy ontologies. Experimental results on real-world
fuzzy ontologies indicate that FOM-CG performs encouragingly well.
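As a toy illustration of the general idea (the similarity rule and names below are assumptions, not the FOM-CG algorithm), the following Python sketch maps fuzzy concepts of a source conceptual graph onto a reference conceptual graph.

```python
# Illustrative toy only: mapping fuzzy concepts of a source conceptual graph
# (S-set) to a reference conceptual graph (R-set); the similarity rule is an
# assumption, not the actual FOM-CG algorithm.

def fuzzy_concept_similarity(c1: dict, c2: dict) -> float:
    """Each concept: {'name': str, 'membership': float in [0, 1]}.
    Combine simple name overlap with closeness of membership degrees."""
    name_sim = 1.0 if c1["name"].lower() == c2["name"].lower() else 0.0
    degree_sim = 1.0 - abs(c1["membership"] - c2["membership"])
    return 0.7 * name_sim + 0.3 * degree_sim


def map_to_reference(source_concepts, reference_concepts, threshold=0.75):
    """Return (source, reference, similarity) triples above the threshold."""
    mappings = []
    for s in source_concepts:
        best = max(reference_concepts,
                   key=lambda r: fuzzy_concept_similarity(s, r))
        sim = fuzzy_concept_similarity(s, best)
        if sim >= threshold:
            mappings.append((s["name"], best["name"], round(sim, 2)))
    return mappings


if __name__ == "__main__":
    r_set = [{"name": "YoungPerson", "membership": 0.8},
             {"name": "Adult", "membership": 1.0}]
    s_set = [{"name": "youngperson", "membership": 0.7}]
    print(map_to_reference(s_set, r_set))
```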
A Model for Analysing Data Portal Performance: The Biodiversity Case
(pp232-248)
Pedro L.P. Corrêa, Pablo Salvanha,
Antonio M. Saraiva, Paulo Scarpelini Neto, Carlos R. Valêncio and
Rogeria C.G. de Souza
Currently, many museums,
botanic gardens and herbaria keep data on biological collections, and
researchers use computational tools to digitize these data and provide
access to them through data portals. The replication of databases into
portals can be accomplished through the use of protocols and data schemas.
However, implementing this solution demands a large amount of time, both
for transferring fragments of data and for processing the data within the
portal. As data digitization grows in these institutions, this scenario
tends to become increasingly exacerbated, making it hard to keep the
records on the portals up to date. As an original contribution, this
research proposes analysing the data replication process in order to
evaluate the performance of portals. The Inter-American Biodiversity
Information Network (IABIN) biodiversity data portal of pollinators was
used as a case study, since it supports both situations: conventional
replication of specimen occurrence records and of interactions between
them. With the results of this research, it is possible to simulate a
situation before its implementation and thereby predict the performance of
replication operations. Additionally, these results may contribute to
future improvements to the process, reducing the time required to make
data available in portals.
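A back-of-the-envelope sketch of the kind of estimate such a performance model enables is given below; the parameter names and numbers are purely illustrative and are not measurements from the IABIN portal.

```python
# Purely illustrative estimate of data replication time into a portal;
# parameter names and numbers are assumptions, not measurements from IABIN.

def estimate_replication_time(num_records: int,
                              records_per_fragment: int,
                              transfer_secs_per_fragment: float,
                              processing_secs_per_record: float) -> float:
    """Total seconds = fragment transfers + per-record processing in the portal."""
    num_fragments = -(-num_records // records_per_fragment)  # ceiling division
    transfer = num_fragments * transfer_secs_per_fragment
    processing = num_records * processing_secs_per_record
    return transfer + processing


if __name__ == "__main__":
    hours = estimate_replication_time(
        num_records=500_000,
        records_per_fragment=1_000,
        transfer_secs_per_fragment=2.0,
        processing_secs_per_record=0.05,
    ) / 3600
    print(f"Estimated replication time: {hours:.1f} hours")
```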
A Hybrid Approach Using PSO and K-Means for Semantic Clustering of Web
Documents (pp249-264)
J. Avanija and K. Ramar
With the massive growth
and volume of the Web, it is very difficult to retrieve results that match
user preferences. The next-generation Web architecture, the Semantic Web,
reduces the user's burden by searching on semantics instead of keywords.
Even in the context of semantic technologies, optimization problems occur
but are rarely considered. In this paper, document clustering is applied to
retrieve relevant documents. We propose an ontology-based clustering
algorithm that uses a semantic similarity measure and Particle Swarm
Optimization (PSO), applied to annotated documents to optimize the result.
The proposed method uses the Jena API and the GATE tool API, and documents
can be retrieved based on their annotation features and relations. A
preliminary experiment comparing the proposed method with K-Means shows
that the proposed method is feasible and performs better than K-Means.
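The following minimal Python sketch (not the authors' algorithm) shows the general hybrid pattern of PSO refining K-Means-style centroids on toy document vectors; the particle counts, inertia and acceleration constants are assumed defaults.

```python
# Minimal sketch (not the authors' algorithm): PSO refining cluster centroids
# initialised by crude K-Means-style seeds, on toy document vectors.
import numpy as np

rng = np.random.default_rng(0)


def assign(docs, centroids):
    dists = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)


def sse(docs, centroids):
    labels = assign(docs, centroids)
    return float(((docs - centroids[labels]) ** 2).sum())


def pso_refine(docs, centroids, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    k, d = centroids.shape
    pos = centroids + 0.1 * rng.standard_normal((n_particles, k, d))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([sse(docs, p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([sse(docs, p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest


if __name__ == "__main__":
    docs = rng.random((40, 5))            # stand-in for semantic document vectors
    init = docs[rng.choice(len(docs), 3, replace=False)]  # crude centroid seeds
    print("SSE before:", round(sse(docs, init), 3))
    print("SSE after PSO:", round(sse(docs, pso_refine(docs, init)), 3))
```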
Slash-Based Relevance Propagation Model for Topic Distillation
(pp265-290)
Mohammad A. Golshani, Ali M. ZarehBidoki, and Vali
Derhami
An efficient and effective ranking mechanism for search engines remains a
challenging problem. In recent years, a few relevance propagation models,
such as hyperlink-based score propagation, hyperlink-based term
propagation, and popularity-based propagation, have been proposed. In this
paper, we give a comprehensive study of relevance propagation techniques
for Web information retrieval and conduct both theoretical and experimental
evaluations of these models to determine which is more effective and
efficient. We also propose a new relevance propagation model based on
content, link structure (the Web graph), and the number of slashes in the
URL; it propagates content scores and slash counts through the link
structure. The goal is to find Web pages more relevant to the user query.
To compare relevance propagation models, LETOR 3.0, a standard Web test
collection, was used in the experiments. We conclude that using the number
of slashes in the propagation process improves Web information retrieval
accuracy.
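As an illustrative sketch of this style of propagation (not the paper's exact formula), the Python snippet below weights each page by its URL depth, measured by slashes in the path, and passes a share of its content score to the pages it links to; the weighting and damping factor are assumptions.

```python
# Illustrative sketch (not the paper's exact formula): propagate content scores
# through the link graph, weighting pages by URL depth (slashes in the path).
from urllib.parse import urlparse


def slash_weight(url: str) -> float:
    """Shallower URLs (fewer slashes in the path) get a higher weight."""
    depth = urlparse(url).path.rstrip("/").count("/")
    return 1.0 / (1 + depth)


def propagate(content_scores, links, alpha=0.6):
    """content_scores: {url: query-content score}; links: {url: [child urls]}.
    Each child receives a share of its parent's score, scaled by the parent's
    slash weight."""
    final = dict(content_scores)
    for parent, children in links.items():
        share = alpha * content_scores.get(parent, 0.0) * slash_weight(parent)
        for child in children:
            final[child] = final.get(child, 0.0) + share / max(len(children), 1)
    return final


if __name__ == "__main__":
    scores = {"http://ex.org/": 0.9, "http://ex.org/a/b/c.html": 0.4}
    links = {"http://ex.org/": ["http://ex.org/a/b/c.html"]}
    print(propagate(scores, links))
```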
A Secure Proxy-Based Cross-Domain Communication for Web Mashup
(pp291-316)
Shun-Wen Hsiao, Yeali S. Sun, and Meng Chang Chen
A web mashup is a web
application that integrates content from heterogeneous sources to provide
users with an integrated and seamless browsing experience. Client-side
mashups differ from server-side mashups in that the content is integrated
in the browser using client-side scripts. However, the legacy same-origin
policy implemented by current browsers cannot provide a flexible
client-side communication mechanism for exchanging information between
resources from different sources. To address this problem, we propose a
secure client-side cross-domain communication mechanism facilitated by a
trusted proxy and the HTML5 postMessage method. The proxy-based model
supports fine-grained access control for elements that belong to different
sources in web mashups, and the design guarantees confidentiality,
integrity, and authenticity during cross-domain communication. The
proxy-based design also allows users to browse mashups without installing
browser plug-ins, and for mashup developers, the provided API minimizes the
amount of code modification. Experimental results demonstrate that the
overhead incurred by our proxy model is low and reasonable. We anticipate
that the proxy-based design can help mashup platform providers offer a
better solution to mashup developers and users.
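To illustrate the flavour of the fine-grained control involved (this is not the authors' API; in the browser the actual exchange uses HTML5 postMessage), the following Python sketch shows the kind of policy check a trusted proxy could apply before relaying a cross-domain message; the element IDs, origins and policy format are assumptions.

```python
# Illustrative only: the kind of fine-grained policy check a trusted mashup proxy
# could apply before relaying a cross-domain message between page elements.
# Element IDs, origins and the policy format are assumptions, not the paper's API.

POLICY = {
    # (sender origin, target element id) -> allowed operations
    ("https://maps.example.com", "weather-widget"): {"read"},
    ("https://news.example.com", "stock-ticker"): {"read", "write"},
}


def authorize(sender_origin: str, target_element: str, operation: str) -> bool:
    """Return True only if the policy explicitly allows the operation."""
    return operation in POLICY.get((sender_origin, target_element), set())


def relay(message: dict) -> str:
    """Relay a cross-domain message only when the access-control check passes."""
    if authorize(message["origin"], message["target"], message["op"]):
        return f"delivered {message['op']} to {message['target']}"
    return "blocked by proxy policy"


if __name__ == "__main__":
    print(relay({"origin": "https://maps.example.com",
                 "target": "weather-widget", "op": "read"}))
    print(relay({"origin": "https://maps.example.com",
                 "target": "stock-ticker", "op": "write"}))
```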
AlexandRIA: A Visual Tool for Generating Multi-device Rich Internet
Applications (pp317-359)
Luis O. Colombo-Mendoza, Giner
Alor-Hernández, Alejandro Rodríguez-González and Ricardo Colomo-Palacios
Rich Internet
Applications (RIAs) engineering is an emerging area of Software Engineering
that still lacks adequate development approaches and support tools compared
to Web Engineering. Therefore, in most cases RIA development is performed
in an ad hoc manner, driven mainly by a set of new frameworks that can be
classified into JavaScript-based and non-JavaScript-based frameworks. RIA
development involves design principles of both Web and desktop applications
because RIAs, a new generation of Internet applications, combine behaviours
and features of these two kinds of applications. Furthermore, mobile
devices such as smartphones and tablet computers are also becoming involved
in RIA development because of the growing demand for ubiquitous Web 2.0
applications; such RIAs are known as multi-device RIAs. During the last few
years, several contributions have aimed to bridge the gap between Web
engineering and RIA engineering support. These proposals are either 1)
extensions of existing methodologies for Web and hypermedia application
development, or 2) Model-Driven Development (MDD) methods for designing
rich Graphical User Interfaces (GUIs), and they do not cover multi-device
RIA development; furthermore, some of them lack support tools. Taking this
into account, in this paper we propose a visual tool that implements a GUI
pattern-based approach for code generation of multi-device RIAs. This
visual tool, called AlexandRIA, is a source and native code generator for
Rapid Application Development (RAD) that automatically generates code from
a set of preferences selected through a wizard. In order to validate our
proposal, two cloud-services-API-based multi-device RIAs are generated
using AlexandRIA. Finally, a qualitative and quantitative evaluation was
performed to assess our proposal against similar academic and commercial
proposals.
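As a toy illustration of wizard-driven, GUI-pattern-based code generation (the templates and preference names below are assumptions, not AlexandRIA's), the following Python sketch fills a device-specific UI template from a set of wizard preferences.

```python
# Illustrative toy only: wizard-style preferences driving template-based UI code
# generation; templates and preference names are assumptions, not AlexandRIA's.
from string import Template

TEMPLATES = {
    ("list-detail", "mobile"): Template(
        "<mobile:ListView source='$api_url'>\n"
        "  <mobile:DetailPane fields='$fields'/>\n"
        "</mobile:ListView>"),
    ("list-detail", "desktop"): Template(
        "<DataGrid source='$api_url' columns='$fields'/>"),
}


def generate_ui(preferences: dict) -> str:
    """Pick a GUI-pattern template from the wizard preferences and fill it in."""
    key = (preferences["gui_pattern"], preferences["device"])
    return TEMPLATES[key].substitute(
        api_url=preferences["api_url"],
        fields=",".join(preferences["fields"]),
    )


if __name__ == "__main__":
    prefs = {
        "gui_pattern": "list-detail",
        "device": "mobile",
        "api_url": "https://api.example.com/photos",
        "fields": ["title", "owner"],
    }
    print(generate_ui(prefs))
```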
Book Review:
On Harnessing Green IT: Principles and Practices (eds: San
Murugesan & G.R. Gangadharan) (pp360-380)
Bebo White