International Journal of Trend in Scientific Research and Development (IJTSRD)
Volume 3 Issue 5, August 2019 Available Online: www.ijtsrd.com e-ISSN: 2456 – 6470
@ IJTSRD | Unique Paper ID – IJTSRD28010 | Volume – 3 | Issue – 5 | July - August 2019 Page 2258
The Data Records Extraction from Web Pages
Nwe Nwe Hlaing, Thi Thi Soe Nyunt, Myat Thet Nyo
Faculty of Computer Science, University of Computer Studies, Meiktila, Myanmar
How to cite this paper: Nwe Nwe Hlaing | Thi Thi Soe Nyunt | Myat Thet Nyo, "The Data Records Extraction from Web Pages", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, pp. 2258-2262, https://doi.org/10.31142/ijtsrd28010
Copyright © 2019 by author(s) and International Journal of Trend in Scientific Research and Development Journal. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (http://creativecommons.org/licenses/by/4.0)
ABSTRACT
No other medium has taken so meaningful a place in our lives in so short a time as the world's largest data network, the World Wide Web. However, when searching for information on this network, the user is exposed to an ever-growing flood of information, which is both a blessing and a curse. The explosive growth and popularity of the World Wide Web have produced a huge number of information sources on the Internet. As web sites become more complicated, constructing web information extraction systems becomes more difficult and time-consuming, so scalable automatic Web Information Extraction (WIE) is in high demand. Information can be extracted from the World Wide Web at four levels: free-text level, record level, page level and site level. In this paper, the target extraction task is record-level extraction.
KEYWORDS: Information Extraction (IE), Wrapper, Document Object Model (DOM)
1. INTRODUCTION
The rapid development of the World Wide Web has dramatically changed the way in which information is managed and accessed. The information on the Web is increasing at a striking speed. At present, there are more than 1 billion web sites, and web information covers all domains of human activity. This has opened the opportunity for users to benefit from the available data, so the Web is attracting more and more attention.
To retrieve information on the web, people visit web sites or browse large numbers of pages matching keywords with the help of search engines. However, manually visiting and searching sites is very time-consuming. Some researchers therefore propose integrating useful data from across the Internet under uniform schemes, so that people can easily access and query the data with relational database techniques. The integrated data can also be mined to provide value-added services, such as comparison shopping. Because the data sources on the Internet are scattered and heterogeneous, it is very difficult to integrate data from web pages. On the other hand, web pages may present information as embedded structured data, most of which comes from backend relational database systems. A way to extract structured data from semi-structured web pages and integrate the data under uniform schemes is therefore needed.
Web information extraction (WIE) is an important task for information integration. Multiple web pages may present the same information using completely different formats or syntaxes, which makes integration of information a challenging task. The structure of current web pages is more complicated than ever and is far different from their layout in web browsers. Due to this heterogeneity and lack of structure, automated discovery of targeted information becomes a complex task. A typical web page consists of many blocks or areas, e.g., main content areas, navigation areas, advertisements, etc. For a particular application, only part of the information is useful, and the rest is noise. Hence, it is useful to separate these areas automatically. Web information extraction is concerned with extracting relevant information from Web pages and transforming it into a form suitable for computerized data-processing applications. Example applications of WIE include price monitoring, market analysis and portal integration.
This paper is divided into several sections. Section 2 describes related work on the theoretical background and Web information extraction. In Section 3 we discuss our proposed methodology in detail. Section 4 discusses the results of our experimental tests, and Section 5 concludes this paper.
2. RELATED WORK
Information extraction from web pages is an active research
area. The existing works in Web data extraction can be
classified according to their degree of automation (for a survey, see [5]). There are several approaches [4], [6], [9], [10], [15]
for structured data extraction, which is also called wrapper
generation. The first approach [9] is to manually write an
extraction program for each web site based on observed
format patterns of the site. This manual approach is very
labor intensive and time consuming. Hence, it does not scale
to a large number of sites. The second approach [10] is
wrapper induction or wrapper learning, which is currently
the main technique. Wrapper learning works as follows: The
user first manually labels a set of training pages. A learning
system then generates rules from the training pages. The
resulting rules are then applied to extract target items from
web pages. These methods either require prior syntactic
knowledge or substantial manual efforts.
The third approach [4] is the automatic approach. The
structured data objects on a web page are normally database records retrieved from underlying web databases and
displayed in web pages with fixed templates. Automatic methods aim to find patterns/grammars in the web pages and then use them to extract data. Examples of automatic systems are IEPAD [4] and ROADRUNNER [6]. ROADRUNNER extracts a template by analyzing a pair of web pages of the same class at a time: it uses one page to derive an initial template and then tries to match the second page against that template. Deriving the initial template must again be done manually, which is a major limitation of this approach.
Another problem with the existing automatic approaches is their assumption that the relevant information of a data record is contained in a contiguous segment of HTML code, which is not always true. MDR [1] basically exploits the regularities in the HTML tag structure directly. MDR works well only for table- and form-enwrapped records, whereas our method does not have this limitation. The MDR algorithm uses the HTML tag tree of a web page to extract data records from it. However, an incorrect tag tree may be constructed due to the misuse of HTML tags, which in turn makes it impossible to extract data records correctly. DEPTA [14] uses visual information (the locations on the screen at which tags are rendered) to find data records. Rather than analyzing the HTML code, the visual information is used to infer the structural relationships among tags and to construct a tag tree. This method of constructing a tag tree has the limitation that the tree can be built correctly only as long as the browser is able to render the page correctly.
Another similar system is ViNTs [8], which proposes an algorithm to find search result records (SRRs) in pages returned by search engines. Our method, in contrast, focuses on list pages that share a presentation template. Although some aspects of web information extraction appear in various existing techniques, the contribution of this paper is its focus on several informative features of web pages and on clustering blocks by appearance similarity.
3. SYSTEM OVERVIEW
This section describes the proposed automatic extraction of data records from web pages. The system uses five steps to extract data records from a semi-structured web page. The algorithm for the proposed system is as follows:
Algorithm: Data Records Extraction
Input: HTML web page P
Output: Extracted data table
1. Create a DOM tree for the input web page P and clean useless nodes.
2. Segment the web page into several raw chunks/blocks Bi = {b1, b2, ..., bi}.
3. Filter the noisy blocks based on heuristic rules.
4. Cluster the remaining blocks based on their appearance similarity.
5. Label the data attributes of the extracted data records.
Figure 1. The Data Records Extraction Algorithm
First of all, the input HTML page is converted into a DOM tree and useless nodes are cleaned as a preprocessing step. Secondly, we pick out several raw chunks in a first round and then filter the noisy blocks based on the noise features of web pages. Thirdly, these blocks are clustered by the proposed block clustering method. Finally, information is extracted from the input web page.
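As an illustration of these steps, the following minimal Python sketch (not the authors' implementation; the block representation and the toy noise rule are invented) segments a page into top-level <div> blocks with the standard library's html.parser and keeps blocks that look like product records:

```python
from html.parser import HTMLParser

class BlockSegmenter(HTMLParser):
    """Collects top-level <div> elements as blocks of text fragments,
    a toy stand-in for DOM-tree block segmentation."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.blocks = []
    def handle_starttag(self, tag, attrs):
        if tag == "div":
            self.depth += 1
            if self.depth == 1:          # a new top-level block begins
                self.blocks.append([])
    def handle_endtag(self, tag):
        if tag == "div":
            self.depth -= 1
    def handle_data(self, data):
        if self.depth >= 1 and data.strip():
            self.blocks[-1].append(data.strip())

def segment(html):
    p = BlockSegmenter()
    p.feed(html)
    return p.blocks

page = "<div>Book A $10</div><div>Book B $12</div><div>ads</div>"
blocks = segment(page)
# Toy noise filter: keep blocks containing a price-like token.
records = [b for b in blocks if any(t.startswith("$") for t in " ".join(b).split())]
print(records)  # [['Book A $10'], ['Book B $12']]
```

In the real system the noise filter and clustering steps of Sections 3.3 and 3.4 replace the toy price rule used here.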
The goal of the system is to offer the data records extracted from web pages to information integration systems such as price comparison and recommendation systems.
3.1. Features in Web Pages
Web pages are used to publish information to users, similar
to other kinds of media, such as newspaper and TV. The
designers often associate different types of information with
distinct visual characteristics (such as font, position, etc.) to
make the information on Web pages easy to understand. As a result, visual features are important for identifying special
information on Web pages.
Position features (PFs). These features indicate the location of the data region on a Web page.
PF1 : Data regions are always centered horizontally.
PF2 : The size of the data region is usually large relative to
the area size of the whole page.
Since the data records are the content in focus on web pages, web page designers usually place the region containing the data records centrally and conspicuously to capture the user's attention. An investigation of a large number of web pages shows, first, that data regions are always located horizontally in the center of Web pages. Second, the size of a data region is usually large when it contains enough data records. The actual size of a data region may vary greatly, because it is influenced not only by the number of data records retrieved but also by what information each data record includes.
Figure 2. Layout models of data records on web pages.
Layout features (LFs). These features indicate how the data
records in the data region are typically arranged.
LF1 : The data records are usually aligned flush left in the
data region.
LF2 : All data records are adjoining.
LF3 : Adjoining data records do not overlap, and the space
between any two adjoining records is the same.
Data records are usually presented in one of the two layout models shown in Fig. 2. In Model 1, the data records are arranged evenly in a single column, though they may differ in width and height. LF1 implies that the data records have the same distance to the left boundary of the data region. In Model 2, data records are arranged in multiple columns, and the data records in the same column have the same distance to the left boundary of the data region. Because most Web pages follow the first model, we focus only on the first model in this paper. In addition, data records do not overlap, which means that the regions of different data records can be separated.
Appearance features (AFs). These features capture the visual features within data records.
AF1 : Data records are very similar in their appearances,
and the similarity includes the number of the images
they contain and the fonts they use.
AF2 : The data items of the same semantic in different data
records have similar presentations with respect to
position, size (image data item), and font (text data
item).
AF3 : The neighboring text data items of different
semantics often use distinguishable fonts.
AF1 describes the visual similarity at the data record level. Generally, there are three types of content in data records: images, plain texts (texts without hyperlinks), and link texts (texts with hyperlinks). AF2 and AF3 describe the visual similarity at the data item level. Text data items of the same semantic always use the same font, and image data items of the same semantic are often similar in size. AF3 indicates that neighboring text data items of different semantics often use distinguishable fonts.
3.2. DOM Tree Generation and Useless-Node Cleaning (Preprocessing)
To begin with, a DOM tree is generated from the HTML tags of the page. At the same time, features/attributes of each node are attached to the corresponding tree nodes, as described in the next section.

Pre-processing is necessary to clean HTML pages, e.g., to remove header details, scripts, styles, comments, hidden tags, whitespace, tag properties, empty tags, etc. In this step, the relaxed whitelist function of the jsoup parsing tool is used for cleaning HTML tags. These nodes must be eliminated to obtain a clean HTML page for further processing.
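The cleaning step might be sketched as follows; the paper uses jsoup's relaxed whitelist in Java, so this stdlib Python stand-in only mimics the idea of dropping scripts, styles and comments while keeping visible text:

```python
from html.parser import HTMLParser

class Cleaner(HTMLParser):
    """Drops <script>/<style> contents and HTML comments, keeping
    visible text fragments (a minimal stand-in for jsoup cleaning)."""
    SKIP = {"script", "style"}
    def __init__(self):
        super().__init__()
        self.skipping = 0
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skipping += 1
    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skipping:
            self.skipping -= 1
    def handle_data(self, data):
        if not self.skipping and data.strip():
            self.out.append(data.strip())
    # comments are dropped automatically: handle_comment is not overridden

def clean(html):
    c = Cleaner()
    c.feed(html)
    return c.out

print(clean("<p>Hello</p><script>var x=1;</script><!-- ad -->World"))
# ['Hello', 'World']
```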
3.3. Filtering Noisy Block
Web page designers tend to organize their content in a reasonable way: giving prominence to important things and de-emphasizing the unimportant parts with appropriate features such as position, size, color, words, images, links, etc. All of the product-page features are related to importance. For example, an advertisement may contain only images and no text, a contact information bar may contain email addresses, and a navigation bar may contain quite a few hyperlinks. However, these features have to be normalized by the feature values of the whole page. For example, the LinkNum of a block should be normalized by the number of links in the whole page. All these features are then formulated by equation (1):
fi(b) = (number of attribute i in block b) / (number of attribute i in the whole page)   (1)
First, some conclusions are drawn about product features. These conclusions are based on observation of product list pages on web sites.
1. If a block contains email elements, it is quite possibly a contact block.
2. If TextLen/LinkTextLen < threshold, it is quite possibly a hub block [7].
3. If <p> is included in a block, the block is possibly an authority block [7].
4. If the normalized LinkNum > threshold, it is quite possibly a hub block.
Accordingly, these rules are combined into equation (2), where F indicates the likelihood that a block is noisy:

F = Σi αi·fi(b) = α1·femail(b) + α2·ftextlen/linktextlen(b) + ... + α4·flinks(b),  with Σi αi = 1   (2)

where αi is a coefficient, so different weights can be set on the block-importance features. All these parameters can be adjusted to adapt to different conditions. Finally, based on the product features, the important blocks are kept for further processing. Filtering noisy blocks thus decreases the complexity of web information extraction by narrowing down the processing scope.
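A toy illustration of equations (1)-(2); the feature set and the α weights below are invented for the example, not taken from the paper:

```python
def noise_score(block, page, weights):
    """Combine normalized block features into a noise score F.
    block/page map feature names to raw counts; weights are the
    alpha coefficients (assumed to sum to 1)."""
    # f_i(b) = count of attribute i in the block / count in the whole page (Eq. 1)
    f = {k: (block[k] / page[k]) if page[k] else 0.0 for k in weights}
    # F = sum_i alpha_i * f_i(b) (Eq. 2)
    return sum(weights[k] * f[k] for k in weights)

page  = {"links": 40, "emails": 2}
nav   = {"links": 30, "emails": 0}   # navigation bar: many links
item  = {"links": 2,  "emails": 0}   # product record: few links
alpha = {"links": 0.7, "emails": 0.3}
# The navigation bar scores higher, i.e. is more likely noise.
print(noise_score(nav, page, alpha) > noise_score(item, page, alpha))  # True
```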
3.4. Blocks Clustering for Data Region Identification
The blocks in the data region are clustered based on their appearance similarity. Since there are three kinds of information in data records, i.e., images, plain text and link text, the appearance similarity of blocks is computed from these three aspects. For images we consider the size; for plain text and link text we consider the fonts used. Intuitively, the more similar two blocks are in image size and font, the more similar they are in appearance. The appearance similarity between two blocks b1 and b2 is given by:

sim(b1,b2) = Wi·simImg(b1,b2) + Wpt·simPT(b1,b2) + Wlt·simLT(b1,b2)   (3)

where simImg(b1,b2), simPT(b1,b2), and simLT(b1,b2) are the similarities based on image size, plain text, and link text, and Wi, Wpt, and Wlt are the weights of these similarities. Table 1 gives the formulas for the component similarities and the weights in the different cases.
Table 1. The formulas of block appearance similarity and the weights in different cases

simImg(b1,b2) = Min{sai(b1), sai(b2)} / Max{sai(b1), sai(b2)}      Wi  = (sai(b1) + sai(b2)) / (sab(b1) + sab(b2))
simPT(b1,b2)  = Min{fnpt(b1), fnpt(b2)} / Max{fnpt(b1), fnpt(b2)}  Wpt = (sapt(b1) + sapt(b2)) / (sab(b1) + sab(b2))
simLT(b1,b2)  = Min{fnlt(b1), fnlt(b2)} / Max{fnlt(b1), fnlt(b2)}  Wlt = (salt(b1) + salt(b2)) / (sab(b1) + sab(b2))

where sai(b) is the total number of images in block b; sab(b) is the total number of items in block b; fnpt(b) is the total number of fonts of the plain texts in block b; sapt(b) is the total number of plain texts in block b; fnlt(b) is the total number of fonts of the link texts in block b; and salt(b) is the total number of link texts in block b.
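Equation (3) with the Table 1 formulas can be sketched as follows, under the assumption that sab(b) counts all items (images plus plain and link texts) in a block; block summaries are invented for the example:

```python
def ratio(x, y):
    """Min/Max ratio from Table 1; two zero counts are treated as identical."""
    return min(x, y) / max(x, y) if max(x, y) else 1.0

def appearance_sim(b1, b2):
    """sim(b1,b2) = Wi*simImg + Wpt*simPT + Wlt*simLT (Eq. 3)."""
    sab1 = b1["sai"] + b1["sapt"] + b1["salt"]   # assumed item total
    sab2 = b2["sai"] + b2["sapt"] + b2["salt"]
    w_i  = (b1["sai"]  + b2["sai"])  / (sab1 + sab2)
    w_pt = (b1["sapt"] + b2["sapt"]) / (sab1 + sab2)
    w_lt = (b1["salt"] + b2["salt"]) / (sab1 + sab2)
    return (w_i  * ratio(b1["sai"],  b2["sai"]) +
            w_pt * ratio(b1["fnpt"], b2["fnpt"]) +
            w_lt * ratio(b1["fnlt"], b2["fnlt"]))

# Two blocks with identical image/text/font summaries.
b1 = {"sai": 1, "sapt": 3, "fnpt": 2, "salt": 1, "fnlt": 1}
b2 = {"sai": 1, "sapt": 3, "fnpt": 2, "salt": 1, "fnlt": 1}
print(appearance_sim(b1, b2))  # 1.0
```

Since the weights sum to 1, identical block summaries give a similarity of exactly 1.0.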
Our block clustering method consists of two steps. The first is to build clusters by computing the similarity among blocks; the similarity sim(bi,bj) between two blocks bi and bj is computed by equation (3). The second is to merge the resulting clusters. The threshold is trained from sample pages. The cluster-building procedure is simplified as follows:
Procedure BlockClustering
  put all the blocks bi into the pool;
  FOR (every block bi in pool) {
    compute the appearance similarity sim(bi, bj) between two blocks;
    IF (sim(bi, bj) > threshold) {
      group bi and bj into a new cluster;
      delete bi and bj from the pool;
    } ELSE {
      create a new cluster for bi;
      delete bi from the pool;
    }
  }
The second step is to merge clusters. To determine whether two clusters should be merged, we define the cluster similarity simCkl between two clusters Ck and Cl as the maximum value of sim(bi,bj) over every pair of blocks bi ∈ Ck and bj ∈ Cl.
Procedure BlockMerging
  FOR (every cluster Ck) {
    compute simCkl with the other clusters;
    IF (simCkl > threshold) {
      merge clusters Ck and Cl;
    }
  }
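The two procedures can be rendered as a runnable sketch; the threshold value and the toy numeric "blocks" (with closeness as a stand-in for appearance similarity) are invented for illustration:

```python
def cluster_blocks(blocks, sim, threshold=0.8):
    """BlockClustering: pair each block with one sufficiently similar
    partner from the pool, otherwise start a singleton cluster."""
    clusters = []
    pool = list(blocks)
    while pool:
        b = pool.pop(0)
        partner = next((c for c in pool if sim(b, c) > threshold), None)
        if partner is not None:
            pool.remove(partner)
            clusters.append([b, partner])
        else:
            clusters.append([b])
    return clusters

def merge_clusters(clusters, sim, threshold=0.8):
    """BlockMerging: merge clusters whose max pairwise block
    similarity (simCkl) exceeds the threshold, until stable."""
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if max(sim(x, y) for x in clusters[i] for y in clusters[j]) > threshold:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Toy similarity: blocks are numbers, more similar when closer.
sim = lambda a, b: 1.0 - min(abs(a - b) / 10.0, 1.0)
out = merge_clusters(cluster_blocks([1, 2, 9, 1.5], sim), sim)
print(sorted(map(sorted, out)))  # [[1, 1.5, 2], [9]]
```

In the real system, sim is the appearance similarity of equation (3), and the largest resulting cluster typically corresponds to the data records.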
4. EXPERIMENTAL RESULTS
Our experiments were run on commercial book store web sites collected from different sites, listed in Table 2. The system takes as input raw HTML pages containing multiple data records. Our method is measured by three factors: the number of actual data records to be extracted, the number of data records extracted from the list page, and the number of correct data records extracted from the list page. Based on these three values, precision and recall are calculated according to the formulas:

Recall = Correct / Actual * 100
Precision = Correct / Extracted * 100

Using these measures, we tested web pages from various book store web sites and checked each page manually.
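The two measures can be checked with a toy count (the values below are invented, not from Table 2):

```python
def recall(correct, actual):
    """Recall = Correct / Actual * 100"""
    return correct / actual * 100

def precision(correct, extracted):
    """Precision = Correct / Extracted * 100"""
    return correct / extracted * 100

# Suppose a page holds 50 actual records; 48 are extracted, 47 correctly.
print(round(recall(47, 50), 1), round(precision(47, 48), 1))  # 94.0 97.9
```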
Table 2. Results for selected Web sites

URL                  Precision   Recall
gobookshopping.com   100         98
yahoo.com            98          97
allbooks4less.com    100         98
amazon.com           92.4        87.5
barnes&nobels.com    98          90
Average              97.68       94.1
Figure 3. Result chart for selected web sites
5. CONCLUSION
In this paper, we have presented the extraction of information content from the semantic structure of HTML documents. It relies on the observed appearance similarity of data records in a web page. First, we segment a web page into several raw chunks. Second, we filter the noisy blocks. Then the proposed block clustering method groups the remaining blocks by their appearance similarity to identify the data region. Our method is automatic, and it generates a reliable and accurate wrapper for web data integration purposes. Neither prior knowledge of the input HTML page nor any training set is required. We experimented on multiple web sites to evaluate our method, and the results show the approach to be promising.
Acknowledgment
I would like to greatly thank my supervisor, Dr. Thi Thi Soe
Nyunt, Professor of the University of Computer Studies,
Yangon, for her valuable advice, helpful comments, her
precious time and pointing me in the right direction. I also
want to thank Daw Aye Aye Khaing, Associate Professor and Head of the English Department, and Daw Yu Yu Hlaing, Associate Professor of the English Department.
References
[1]. B. Liu, R. Grossman and Y. Zhai, "Mining Data Records in Web Pages", ACM SIGKDD Conference, 2003.
[2]. B. Liu and Y. Zhai, "NET – A System for Extracting Web Data from Flat and Nested Data Records", WISE Conference, 2005.
[3]. Cai, D., Yu, S., Wen, J.-R. and Ma, W.-Y., VIPS: a vision-
based page segmentation algorithm, Microsoft
Technical Report.
[4]. Chang, C-H., Lui, S-L. “IEPAD: Information Extraction
Based on Pattern Discovery”, WWW-01, 2001.
[5]. Chang, C.-H., Kayed, M., Girgis, M., and Shaalan, K.
(2006). “A survey of web information extraction
systems”, IEEE Transactions on Knowledge and Data
Engineering, 18(10):1411–1428.
[6]. Crescenzi, V. and Mecca, G. “Automatic information
extraction from large websites”, Journal of the ACM,
2004, 51(5):731–779.
[7]. D. Cai, H. Xiaofei, W. Ji-Rong, and M. Wei-Ying, “Block-
level Link Analysis”, SIGIR'04, July 25-29, 2004.
[8]. H. Zhao, W. Meng, Z. Wu, V. Raghavan, C. Yu, “Fully
Automatic Wrapper Generation for Search Engines”,
WWW Conference, 2005.
[9]. J. Hammer, H. Garcia Molina, J. Cho, and A. Crespo,
“Extracting semi-structuredinformationfromtheweb”,
In Proceeding of the Workshop on the Management of
Semi-structured Data, 1997.
[10]. Kushmerick, N, “Wrapper Induction: Efficiency and
Expressiveness. Artificial Intelligence”, 118:15-68,
2000.
[11]. M. Kayed, C.-H. Chang, “FiVaTech: Page-Level WebData
Extraction from Template Pages”, IEEE TKDE, vol. 22,
no. 2, pp. 249-263, Feb. 2010.
[12]. Shian-Hua Lin, Jan-Ming Ho, “Discovering Informative
Content Blocks from Web Documents”, IEEE
Transactions on KnowledgeandDataEngineering,page
41-45, Jan, 2004.
[13]. Yang, Y. and Zhang, H., "HTML page analysis based on visual cues".
[14]. YuJuan Cao, ZhenDong Niu, LiuLing Dai,YuMing Zhao,
“Extraction of Informative Blocks from web pages”, in
the Proceedings of International Conference on
Advanced Language Processing and Web Information
Technology, 2008.
[15]. Y. Zhai and B. Liu, "Web Data Extraction Based on Partial Tree Alignment", WWW Conference, 2005.
JoelVilloso1
 
GRADE-3-PPT-EVE-2025-ENG-Q1-LESSON-1.pptx
EveOdrapngimapNarido
 
Horarios de distribución de agua en julio
pegazohn1978
 
care of patient with elimination needs.pptx
Rekhanjali Gupta
 
ARAL-Orientation_Morning-Session_Day-11.pdf
JoelVilloso1
 
Geographical Diversity of India 100 Mcq.pdf/ 7th class new ncert /Social/Samy...
Sandeep Swamy
 
Talk on Critical Theory, Part One, Philosophy of Social Sciences
Soraj Hongladarom
 
MENINGITIS: NURSING MANAGEMENT, BACTERIAL MENINGITIS, VIRAL MENINGITIS.pptx
PRADEEP ABOTHU
 
Growth and development and milestones, factors
BHUVANESHWARI BADIGER
 
Stereochemistry-Optical Isomerism in organic compoundsptx
Tarannum Nadaf-Mansuri
 
I AM MALALA The Girl Who Stood Up for Education and was Shot by the Taliban...
Beena E S
 
Aprendendo Arquitetura Framework Salesforce - Dia 03
Mauricio Alexandre Silva
 
The Different Types of Non-Experimental Research
Thelma Villaflores
 

The Data Records Extraction from Web Pages

KEYWORDS: Information Extraction (IE), Wrapper, Document Object Model (DOM)

1. INTRODUCTION
The rapid development of the World Wide Web has dramatically changed the way in which information is managed and accessed. The information on the Web is increasing at a striking speed. At present, there are more than 1 billion web sites, and web information covers all domains of human activity. This opens the opportunity for users to benefit from the available data, so the Web is receiving more and more attention. To retrieve information on the web, people visit web sites or browse a large number of web pages related by keywords with the help of search engines. However, manually visiting and searching sites is very time-consuming. Some researchers therefore propose to integrate useful data over the whole Internet with uniform schemes, so that people can easily access and query the data with relational database techniques. At the same time, the integrated data can be mined to provide value-added services, such as comparison shopping.

As the data sources on the Internet are scattered and heterogeneous, it is very difficult to integrate data from web pages. On the other hand, web pages may present information with embedded structured data, most of which comes from backend relational database systems. Finding a way to extract structured data from semi-structured web pages and integrating the data with uniform schemes is therefore necessary. Web information extraction (WIE) is an important task for information integration. Multiple web pages may present the same information using completely different formats or syntaxes, which makes integration of information a challenging task. The structure of current web pages is more complicated than ever and is far different from their layouts in web browsers.
Due to this heterogeneity and lack of structure, automated discovery of targeted information becomes a complex task. A typical web page consists of many blocks or areas, e.g., main content areas, navigation areas, advertisements, etc. For a particular application, only part of the information is useful, and the rest is noise. Hence, it is useful to separate these areas automatically. Web information extraction is concerned with the extraction of relevant information from Web pages and its transformation into a form suitable for computerized data-processing applications. Example applications of WIE include price monitoring, market analysis and portal integration.

This paper is divided into several sections. Section 2 describes the related work on the theoretical background and Web information extraction. In Section 3 we discuss our proposed methodology in detail. Section 4 discusses the results of our experimental tests, while Section 5 concludes this paper.

2. RELATED WORK
Information extraction from web pages is an active research area. The existing works in Web data extraction can be classified according to their automation degree (for a survey, see [5]). There are several approaches [4], [6], [9], [10], [15] for structured data extraction, which is also called wrapper generation. The first approach [9] is to manually write an extraction program for each web site based on observed format patterns of the site. This manual approach is very labor intensive and time-consuming; hence, it does not scale to a large number of sites. The second approach [10] is wrapper induction or wrapper learning, which is currently the main technique. Wrapper learning works as follows: the user first manually labels a set of training pages. A learning system then generates rules from the training pages. The resulting rules are then applied to extract target items from web pages. These methods either require prior syntactic knowledge or substantial manual effort.
The third approach [4] is the automatic approach. The structured data objects on the Web are normally database records retrieved from underlying web databases and
displayed in web pages with some fixed templates. Automatic methods aim to find patterns/grammars in the web pages and then use them to extract data. Examples of automatic systems are IEPAD [4] and ROADRUNNER [6]. ROADRUNNER extracts a template by analyzing a pair of web pages of the same class at a time. It uses one page to derive an initial template and then tries to match the second page with the template. Deriving the initial template has to be done manually, which is a major limitation of this approach. Another problem with the existing automatic approaches is their assumption that the relevant information of a data record is contained in a contiguous segment of HTML code, which is not always true. MDR [1] basically exploits the regularities in the HTML tag structure directly. MDR works well only for table- and form-enwrapped records, while our method does not have this limitation. The MDR algorithm makes use of the HTML tag tree of the web page to extract data records from the page. However, an incorrect tag tree may be constructed due to the misuse of HTML tags, which in turn makes it impossible to extract data records correctly. DEPTA [14] uses visual information (the locations on the screen at which the tags are rendered) to find data records. Rather than analyzing the HTML code, the visual information is utilized to infer the structural relationship among tags and to construct a tag tree. But this method of constructing a tag tree has the limitation that the tag tree can be built correctly only as long as the browser is able to render the page correctly. Another similar system is ViNTs [8], which proposes an algorithm to find SRRs (search result records) in pages returned by search engines. Our method, in contrast, focuses on list pages generated from the same presentation template.
Although some aspects and pieces of web information extraction appear in various existing techniques, the emphasis of this paper is on selected interesting features of web pages and on block clustering by appearance similarity.

3. SYSTEM OVERVIEW
This section describes the proposed automatic data records extraction from web pages. In this system, there are five steps to extract data records from a semi-structured web page. The algorithm for the proposed system is as follows:

Algorithm: Data Records Extraction
1. Input: HTML Web page
2. Output: Extracted data table
3. Create a DOM tree for input web page P and clean useless nodes.
4. Segment the web page into several raw chunks/blocks Bi = {b1, b2, ..., bi}.
5. Filter the noisy blocks based on heuristic rules.
6. Cluster the remaining blocks based on their appearance similarity.
7. Label the data attributes for each extracted data record.

Figure 1. The Data Records Extraction Algorithm

First of all, the input HTML page is converted into a DOM tree and useless nodes are cleaned as a preprocessing step. Secondly, we pick out several raw chunks in a first round and then filter the noisy blocks based on noisy features of web pages. Thirdly, these blocks are clustered by the proposed block clustering method. Finally, we extract the information from the input web page. The goal of the system is to offer the data records extracted from web pages to information integration systems such as price comparison systems and recommendation systems.

3.1. Features in Web Pages
Web pages are used to publish information to users, similar to other kinds of media, such as newspapers and TV. Designers often associate different types of information with distinct visual characteristics (such as font, position, etc.) to make the information on Web pages easy to understand. As a result, visual features are important for identifying special information on Web pages.

Position features (PFs). These features indicate the location of the data region on a Web page.
PF1: Data regions are always centered horizontally.
PF2: The size of the data region is usually large relative to the area of the whole page.

Since the data records are the contents in focus on web pages, web page designers always place the region containing the data records centrally and conspicuously on pages to capture the user's attention. By investigating a large number of web pages, we observe that, first, data regions are always located in the center section horizontally on Web pages. Second, the size of a data region is usually large when there are enough data records in it. The actual size of a data region may change greatly, because it is influenced not only by the number of data records retrieved, but also by what information is included in each data record.

Figure 2. Layout models of data records on web pages.

Layout features (LFs). These features indicate how the data records in the data region are typically arranged.
LF1: The data records are usually aligned flush left in the data region.
LF2: All data records are adjoining.
LF3: Adjoining data records do not overlap, and the space between any two adjoining records is the same.

Data records are usually presented in one of the two layout models shown in Figure 2. In Model 1, the data records are arranged evenly in a single column, though they may differ in width and height. LF1 implies that the data records have the same distance to the left boundary of the data region. In Model 2, data records are arranged in multiple columns, and the data records in the same column have the same distance to the left boundary of the data region. Because most Web pages follow the first model, we only focus on the first model in this paper. In addition, data records do not overlap, which means that the regions of different data records can be separated.
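As a concrete illustration, the layout features LF1-LF3 become mechanically checkable once each block carries a bounding box. The sketch below is our own illustration, not the authors' implementation: the box format (left, top, width, height) and the pixel tolerance are assumptions.

```python
# Check LF1-LF3 for a candidate sequence of record blocks, each given
# as a bounding box (left, top, width, height). The tolerance `tol`
# is an illustrative value, not one taken from the paper.

def satisfies_layout_features(boxes, tol=2):
    boxes = sorted(boxes, key=lambda b: b[1])      # order records top-to-bottom
    lefts = [b[0] for b in boxes]
    # LF1: flush left -> all left edges (nearly) equal.
    if max(lefts) - min(lefts) > tol:
        return False
    # LF2/LF3: adjoining, non-overlapping, and equally spaced records.
    gaps = []
    for (l1, t1, w1, h1), (l2, t2, w2, h2) in zip(boxes, boxes[1:]):
        gap = t2 - (t1 + h1)                       # vertical space between records
        if gap < 0:                                # records overlap -> violates LF3
            return False
        gaps.append(gap)
    return max(gaps) - min(gaps) <= tol if gaps else True

records = [(10, 0, 200, 50), (10, 55, 200, 40), (11, 100, 200, 60)]
print(satisfies_layout_features(records))  # True: aligned, adjoining, even gaps
```

A block list that fails any one feature (e.g., overlapping boxes, or a navigation bar far to the left) is rejected, which is how such heuristics narrow down data-region candidates.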
Appearance features (AFs). These features capture the visual features within data records.
AF1: Data records are very similar in their appearances, and the similarity includes the number of images they contain and the fonts they use.
AF2: The data items of the same semantic in different data records have similar presentations with respect to position, size (image data items), and font (text data items).
AF3: The neighboring text data items of different semantics often use distinguishable fonts.

AF1 describes the visual similarity at the data record level. Generally, there are three types of data contents in data records, i.e., images, plain texts (texts without hyperlinks), and link texts (texts with hyperlinks). AF2 and AF3 describe the visual similarity at the data item level. The text data items of the same semantic always use the same font, and the image data items of the same semantic are often similar in size. AF3 indicates that the neighboring text data items of different semantics often use distinguishable fonts.

3.2. DOM Tree Generation and Cleaning Useless Nodes (Preprocessing)
To begin with, a DOM tree is generated from the HTML tags of the page. At the same time, features/attributes of each node are attached to the corresponding tree nodes, as described in the next section. Preprocessing is necessary in order to clean HTML pages, e.g., to remove header details, scripts, styles, comments, hidden tags, spaces, tag properties, empty tags, etc. In this step, the relaxed whitelist function of the jsoup parsing tool is used for cleaning HTML tags. We first eliminate these nodes to get a clean HTML page for further processing.

3.3.
Filtering Noisy Blocks
Web page designers tend to organize their content in a reasonable way: giving prominence to important things and deemphasizing the unimportant parts with proper features such as position, size, color, words, images, links, etc. All of these product page features are related to importance. For example, an advertisement may contain only images but no text, a contact information bar may contain email addresses, and a navigation bar may contain quite a few hyperlinks. However, these features have to be normalized by the feature values of the whole page. For example, the LinkNum of a block should be normalized by the number of links in the whole page. All these features are formulated with equation (1):

f_i(b) = (number of occurrences of attribute i in block b) / (number of occurrences of attribute i in the whole page)   (1)

Firstly, some conclusions are drawn on product features. All these conclusions are based on the observation of product list pages on web sites.
1. If a block contains email elements, then it is quite possibly a contact block.
2. If TextLen/LinkTextLen < threshold, then it is quite possibly a hub block [7].
3. If <p> is included in a block, then this block is possibly an authority block [7].
4. If the normalized LinkNum > threshold, then it is quite possibly a hub block.

Accordingly, these rules are combined into equation (2), where F indicates the possibility that a block is noisy:

F = Σ_i α_i · f_i(b) = α_1 · f_email(b) + α_2 · f_textlen/linktextlen(b) + ... + α_4 · f_links(b),   Σ_i α_i = 1   (2)

where the α_i are coefficients, so that different weights can be set on block importance respectively; all these parameters can be adjusted to adapt to different conditions. Finally, regarding product features, the important blocks are retained for further processing. Consequently, filtering noisy blocks decreases the complexity of web information extraction by narrowing down the processing scope.

3.4.
Blocks Clustering for Data Region Identification
The blocks in the data region are clustered based on their appearance similarity. Since there are three kinds of information in data records, i.e., images, plain text and link text, the appearance similarity of blocks is computed from these three aspects. For images, we care about the size; for plain text and link text, we care about the fonts shared. Intuitively, if two blocks are more similar in image size and font, they should be more similar in appearance. The appearance similarity formula between two blocks b1 and b2 is given below:

sim(b1, b2) = W_i · simImg(b1, b2) + W_pt · simPT(b1, b2) + W_lt · simLT(b1, b2)   (3)

where simImg(b1, b2), simPT(b1, b2), and simLT(b1, b2) are the similarities based on image size, plain text, and link text, and W_i, W_pt, and W_lt are the weights of these similarities. Table 1 gives the formulas to compute the component similarities and the weights in the different cases.

Table 1. The formulas of block appearance similarity and the weights in different cases

simImg(b1, b2) = Min{sa_i(b1), sa_i(b2)} / Max{sa_i(b1), sa_i(b2)}
W_i = (sa_i(b1) + sa_i(b2)) / (sa_b(b1) + sa_b(b2))
simPT(b1, b2) = Min{fn_pt(b1), fn_pt(b2)} / Max{fn_pt(b1), fn_pt(b2)}
W_pt = (sa_pt(b1) + sa_pt(b2)) / (sa_b(b1) + sa_b(b2))
simLT(b1, b2) = Min{fn_lt(b1), fn_lt(b2)} / Max{fn_lt(b1), fn_lt(b2)}
W_lt = (sa_lt(b1) + sa_lt(b2)) / (sa_b(b1) + sa_b(b2))

where sa_i(b) is the total number of images in block b; sa_b(b) is the total number of items in block b; fn_pt(b) is the total number of fonts of the plain texts in block b; sa_pt(b) is the total number of plain texts in block b; fn_lt(b) is the total number of fonts of the link texts in block b; and sa_lt(b) is the total number of link texts in block b.
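Equation (3) together with Table 1 can be sketched as a small runnable function. This is our own illustration, not the authors' code; the dict-based block summary and the sample values are assumptions, and we read sa_b as images + plain texts + link texts.

```python
# Sketch of the appearance similarity of Eq. (3) / Table 1. A block is
# summarised by: number of images ("img"), plain texts ("pt"), link
# texts ("lt"), and the counts of distinct fonts used by each text kind.
# Assumption: sa_b(b) = img + pt + lt (total items in the block).

def _ratio(x, y):
    """Min/Max ratio used by all three component similarities."""
    hi = max(x, y)
    return min(x, y) / hi if hi else 1.0

def appearance_sim(b1, b2):
    sa_b = lambda b: b["img"] + b["pt"] + b["lt"]
    total = sa_b(b1) + sa_b(b2)             # assumed nonzero for real blocks
    w_i  = (b1["img"] + b2["img"]) / total  # Table 1 weights
    w_pt = (b1["pt"]  + b2["pt"])  / total
    w_lt = (b1["lt"]  + b2["lt"])  / total
    return (w_i  * _ratio(b1["img"],      b2["img"]) +
            w_pt * _ratio(b1["pt_fonts"], b2["pt_fonts"]) +
            w_lt * _ratio(b1["lt_fonts"], b2["lt_fonts"]))

book1 = {"img": 1, "pt": 3, "lt": 1, "pt_fonts": 2, "lt_fonts": 1}
book2 = {"img": 1, "pt": 3, "lt": 1, "pt_fonts": 2, "lt_fonts": 1}
ad    = {"img": 3, "pt": 0, "lt": 0, "pt_fonts": 0, "lt_fonts": 0}
print(round(appearance_sim(book1, book2), 6))  # identical blocks -> 1.0
print(appearance_sim(book1, ad) < 0.5)         # image-only ad is dissimilar
```

Because the weights are proportional to how much of each content kind the two blocks contain, a component that is absent from both blocks simply does not influence the score.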
Our block clustering method consists of two steps. The first is to build clusters by computing the similarity among blocks: the similarity sim(bi, bj) between two blocks bi and bj is computed by equation (3). The second is to merge the resulting clusters. The threshold is trained from sample pages. The cluster-building procedure is as follows:

Procedure BlockClustering
  Put all the blocks bi into the pool;
  FOR (every block bi in pool) {
    compute the appearance similarity sim(bi, bj) between two blocks;
    IF (sim(bi, bj) > threshold) {
      group bi and bj into a new cluster;
      delete bi and bj from the pool;
    } ELSE {
      create a new cluster for bi;
      delete bi from the pool;
    }
  }

The second step is to merge clusters. To determine whether two clusters must be merged, we define the cluster similarity simCkl between two clusters Ck and Cl as the maximum value of sim(bi, bj) over every pair of blocks bi ∈ Ck and bj ∈ Cl.

Procedure BlockMerging
  FOR (every cluster Ck) {
    compute simCkl with the other clusters;
    IF (simCkl > threshold) {
      merge clusters Ck and Cl;
    }
  }

4. EXPERIMENTAL RESULTS
Our experiments were run on commercial book store web sites collected from different sites, listed in Table 2. The system takes as input raw HTML pages containing multiple data records. The evaluation of our method is based on three factors: the number of actual data records to be extracted, the number of data records extracted from the list page, and the number of correct data records extracted from the list page. Based on these three values, precision and recall are calculated according to the formulas:

Recall = Correct / Actual * 100
Precision = Correct / Extracted * 100

According to the above measurement, we tested web pages from various book store web sites and checked each page manually.

Table 2.
Results for Selected Web Sites

URL                  Precision   Recall
gobookshopping.com   100         98
yahoo.com            98          97
allbooks4less.com    100         98
amazon.com           92.4        87.5
barnes&nobels.com    98          90
Average              97.68       94.1

Figure 3. Result chart for selected web sites

5. CONCLUSION
In this paper, we have presented the extraction of information content from the semantic structure of HTML documents. It relies on the observation of the appearance similarity of data records in a web page. Firstly, we segment a web page into several raw chunks. Secondly, we filter the noisy blocks. Then the proposed block clustering method groups the remaining blocks by their appearance similarity for data region identification. Our method is automatic and generates a reliable and accurate wrapper for web data integration purposes. In this case, neither prior knowledge of the input HTML page nor any training set is required. We experimented on multiple web sites to evaluate our method, and the results show the approach to be promising.

Acknowledgment
I would like to greatly thank my supervisor, Dr. Thi Thi Soe Nyunt, Professor of the University of Computer Studies, Yangon, for her valuable advice, helpful comments, her precious time and pointing me in the right direction. I also want to thank Daw Aye Aye Khaing, Associate Professor and Head of the English Department, and Daw Yu Yu Hlaing, Associate Professor of the English Department.

References
[1] B. Liu, R. Grossman and Y. Zhai, "Mining Data Records in Web Pages", ACM SIGKDD Conference, 2003.
[2] B. Liu and Y. Zhai, "NET - A System for Extracting Web Data from Flat and Nested Data Records", WISE Conference, 2005.
[3] D. Cai, S. Yu, J.-R. Wen and W.-Y. Ma, "VIPS: A Vision-Based Page Segmentation Algorithm", Microsoft Technical Report.
[4] C.-H. Chang and S.-L. Lui, "IEPAD: Information Extraction Based on Pattern Discovery", WWW-01, 2001.
[5] C.-H. Chang, M. Kayed, M. Girgis and K. Shaalan (2006),
"A Survey of Web Information Extraction Systems", IEEE Transactions on Knowledge and Data Engineering, 18(10):1411-1428.
[6] V. Crescenzi and G. Mecca, "Automatic Information Extraction from Large Websites", Journal of the ACM, 51(5):731-779, 2004.
[7] D. Cai, X. He, J.-R. Wen and W.-Y. Ma, "Block-Level Link Analysis", SIGIR '04, July 25-29, 2004.
[8] H. Zhao, W. Meng, Z. Wu, V. Raghavan and C. Yu, "Fully Automatic Wrapper Generation for Search Engines", WWW Conference, 2005.
[9] J. Hammer, H. Garcia-Molina, J. Cho and A. Crespo, "Extracting Semi-Structured Information from the Web", in Proceedings of the Workshop on the Management of Semi-structured Data, 1997.
[10] N. Kushmerick, "Wrapper Induction: Efficiency and Expressiveness", Artificial Intelligence, 118:15-68, 2000.
[11] M. Kayed and C.-H. Chang, "FiVaTech: Page-Level Web Data Extraction from Template Pages", IEEE TKDE, vol. 22, no. 2, pp. 249-263, Feb. 2010.
[12] S.-H. Lin and J.-M. Ho, "Discovering Informative Content Blocks from Web Documents", IEEE Transactions on Knowledge and Data Engineering, pp. 41-45, Jan. 2004.
[13] Y. Yang and H. Zhang, "HTML Page Analysis Based on Visual Cues".
[14] Y. Cao, Z. Niu, L. Dai and Y. Zhao, "Extraction of Informative Blocks from Web Pages", in Proceedings of the International Conference on Advanced Language Processing and Web Information Technology, 2008.
[15] Y. Zhai and B. Liu, "Web Data Extraction Based on Partial Tree Alignment", WWW Conference, 2005.