Vol. 9, No. 4, December 1, 2010
Research Articles:
Web Site Metadata
(pp. 283-301)
Erik Wilde and Anuradha Roy
The currently established formats for how a Web site can publish
metadata about a site's pages, the robots.txt file and sitemaps, focus on
how to provide information to crawlers about where to go and where not to
go on a site. This is sufficient as input for crawlers, but
does not allow Web sites to publish richer metadata about their site's
structure, such as the navigational structure. This paper looks at the
availability of Web site metadata on today's Web in terms of available
information resources and quantitative aspects of their contents. Such
an analysis of the available Web site metadata not only makes it easier
to understand what data is available today; it also serves as the
foundation for investigating what kind of information retrieval
processes could be driven by that data, and what additional data could
be provided by Web sites if they had richer data formats to publish
metadata.
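For readers unfamiliar with the two formats the abstract refers to, the
following minimal sketch (illustrative only, not taken from the paper; the
site URL is a placeholder) shows how a crawler might read a site's
robots.txt and the sitemap references it advertises, using Python's
standard urllib.robotparser:

# Minimal sketch (not from the paper): reading the crawler-oriented
# Web site metadata discussed above -- robots.txt rules and sitemap
# references -- with the Python standard library. The site URL is a
# hypothetical placeholder.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://www.example.org/robots.txt")  # hypothetical site
robots.read()

# "Where to go / where not to go": may this crawler fetch a given page?
print(robots.can_fetch("MyCrawler", "https://www.example.org/private/page.html"))

# Sitemap URLs advertised in robots.txt (None if the site lists none);
# sitemaps enumerate a site's pages but not its navigational structure,
# which is the gap the paper highlights.
print(robots.site_maps())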
An i*-based Approach for Modeling and Testing Web
Requirements
(pp. 302-326)
Esteban Robles Luna, Irene Garrigos, Jose-Norberto Mazon,
Juan Trujillo, and Gustavo Rossi
Web designers are often unaware of how to model real users' expectations
and goals, mainly due to the large and heterogeneous audience of the Web.
This leads to websites that are difficult for visitors to comprehend and
complex for designers to maintain; these problems could be ameliorated if
users were able to evaluate the application under development and provide
their feedback. To this aim, in this paper we present an approach that uses
the i* framework for modeling users' goals, together with mockups and
WebSpec diagrams for detailing the specification of Web requirements, so
that the process of evaluating i* models for Web applications can be
automated, thus improving users' feedback during the development process.
Also, as part of our development approach, we
derive the domain and navigational models by defining a set of automatic
transformations to a specific Web modeling method. Finally, we
illustrate our approach with a case study to show its applicability and
describe a prototype tool that supports the process.
A QoS-Enhanced Framework and Trust Model for
Effective Web Services Selection
(pp. 327-346)
Zhedan Pan and Jongmoon Baik
Service Oriented Architecture (SOA) has become a
promising paradigm for software development. One of the most important
research topics in SOA is Web service selection, which means identifying
the best services from among a set of services with the same or similar
functions but different QoS (Quality of Service). Many previous approaches,
such as QoS models with quality criteria and selection algorithms, have
been proposed to optimize Web service selection. However, in current
research, quality values normally come from service providers, who are
likely to exaggerate these values for advertising purposes. It is also
argued that reputation based on an average user rating is not enough to
indicate the trust degree of Web services and service providers. In
addition, handling the dynamic nature of Web services is still a
challenging problem for dynamic Web service selection. This paper focuses
on these problems. First, a QoS-enhanced framework for effective Web
service selection is proposed. Then a trust model is built, composed of a
TQoS model, a Decision model, and Trust correction. It is claimed that a
Web service can be regarded as trustworthy if the QoS values received by
consumers and tested by the registry are no less than the QoS values
promised by providers. A prototype of the proposed framework is
implemented, including an SC agent, an SR agent, and a QoS-enhanced SR. In
addition, a scenario in which a tour agency selects Web services according
to its business process is implemented. To validate the effectiveness of
the proposed approach, we compared it with other approaches, such as the
Euclidean approach and the fuzzy approach. Numerical simulation shows that
the proposed approach performs better than the other approaches in terms of
the obtained quality values.
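The trust condition stated in the abstract lends itself to a small
illustration; the following sketch (hypothetical attribute names and data
structures, not the authors' implementation) treats a service as
trustworthy only if both the consumer-observed and the registry-tested QoS
values meet or exceed the provider's promised values for every attribute:

# Illustrative sketch of the trust condition stated in the abstract;
# attribute names and data structures are hypothetical, not the paper's.
def is_trustworthy(promised, consumer_observed, registry_tested):
    """Return True if observed and tested QoS values are no less than the
    provider's promised values for every attribute (assumes
    higher-is-better attributes such as availability or throughput)."""
    return all(
        consumer_observed[attr] >= value and registry_tested[attr] >= value
        for attr, value in promised.items()
    )

# Hypothetical example: a provider promises availability and throughput.
promised = {"availability": 0.99, "throughput": 120.0}
observed = {"availability": 0.995, "throughput": 130.0}
tested = {"availability": 0.99, "throughput": 118.0}
print(is_trustworthy(promised, observed, tested))  # False: tested throughput falls short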
Visual Web Mining
for Website Evaluation
(pp. 347-368)
Victor Pascual-Cid, Ricardo Baeza-Yates, and J. Carlos
Dursteler
In this paper we present an interactive system named Website Exploration
Tool (WET) that aims at supporting the evaluation of websites through
the exploration of web data. Our prototype offers a set of coordinated
visual abstractions in the form of interactive graphs and trees to
analyse data graphs within meaningful contexts such as the website
structure and users' flow. In addition to classical approaches, our highly
interactive system introduces a wide variety of information visualisation
techniques that provide visual cues to assist in the process of digging
into usage data. Among them, we present a new hierarchical approach for
characterising users' flow, which simplifies the intricate graphs generated
by real users' browsing, and a technique for extracting contextual
subgraphs that simplifies the task of visualising very large websites. The
interface of the system has been evaluated with expert analysts who
validated the usefulness of the tool for analysing a wide variety of
websites.