Abstract
A web crawler is a program that automatically downloads web pages and is an important component of a search engine. A traditional crawler starts from the URLs of one or several seed pages, obtains the URLs found on those pages, and, as it fetches pages, keeps extracting new URLs from the current page and adding them to a queue, until the URL queue is empty.
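To make this queue-driven loop concrete, the following is a minimal sketch in Java (not the code developed in this thesis): it starts from a placeholder seed URL, downloads each page, pulls links out with a deliberately simplified regular expression, and keeps going until the URL queue is empty. The seed address, the 100-page cap, and the regex-based link extraction are assumptions made only for illustration; a real Spider would use a proper HTML parser.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal queue-based crawl loop: start from a seed URL, download pages,
 *  extract new links, and repeat until the URL queue is empty. */
public class SimpleCrawler {
    // Naive href pattern used only for this sketch.
    private static final Pattern HREF = Pattern.compile("href=\"(http[^\"]+)\"");

    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        queue.add("http://www.example.com/");           // placeholder seed URL

        while (!queue.isEmpty() && visited.size() < 100) {  // illustrative page cap
            String url = queue.poll();
            if (!visited.add(url)) continue;             // skip already-crawled URLs
            String html = download(url);
            Matcher m = HREF.matcher(html);
            while (m.find()) {                           // enqueue newly found URLs
                String link = m.group(1);
                if (!visited.contains(link)) queue.add(link);
            }
        }
    }

    private static String download(String url) {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) sb.append(line).append('\n');
        } catch (Exception e) {
            // ignore unreachable or malformed pages in this sketch
        }
        return sb.toString();
    }
}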
The topic-specific crawler designed in this thesis searches for information only on e-commerce websites, so that users can find as much of the product information they care about as possible. Its workflow is considerably more involved: based on an analysis of each page, it filters out links that are unrelated to e-commerce product information, keeps the useful links, and places them into the queue of URLs waiting to be crawled. It then selects, according to a given search strategy, the next URL to fetch from the queue and repeats this process until the queue of pending URLs is empty. In addition, every page fetched by the crawler is stored by the system. Building on an analysis of how web crawlers work, and combining it with multithreading technology, this thesis designs the crawler program.
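The link-filtering step can be illustrated with a small, hypothetical Java predicate that decides whether a URL should enter the crawl queue. The keyword lists below are assumptions made for the sake of the example; a real topic-specific crawler would tune such rules to the actual e-commerce sites it targets.

import java.util.Arrays;
import java.util.List;

/** Illustrative URL filter: keep only links that look like e-commerce
 *  product or category pages before they enter the crawl queue. */
public class EcommerceUrlFilter {
    // Hypothetical keyword lists; the real rules depend on the target sites.
    private static final List<String> KEEP =
            Arrays.asList("product", "item", "goods", "detail");
    private static final List<String> DROP =
            Arrays.asList("login", "help", "about", ".jpg", ".css", ".js");

    public static boolean accept(String url) {
        String u = url.toLowerCase();
        for (String bad : DROP) {
            if (u.contains(bad)) return false;      // discard irrelevant resources
        }
        for (String good : KEEP) {
            if (u.contains(good)) return true;      // keep likely product links
        }
        return false;                               // default: do not enqueue
    }

    public static void main(String[] args) {
        System.out.println(accept("http://shop.example.com/product/12345")); // true
        System.out.println(accept("http://shop.example.com/help/faq"));      // false
    }
}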
Key words: search engine, web crawler, e-commerce
Design and Implementation of a Topic-Specific Web Crawler for E-commerce Websites
Abstract
A web crawler is a program that automatically downloads web pages; it fetches pages from the World Wide Web for a search engine and serves as an important component of one. A traditional web crawler starts from the URLs of one or several initial pages and extracts new URLs from them; as it continuously downloads HTML pages, it keeps finding new URLs and deciding which of them to add to a queue, and it runs until the URL queue is empty.
The crawler designed in this thesis collects information only from e-commerce websites, so that users can find as much of the information they care about as possible.
This e-commerce-oriented crawler has a fairly complex workflow: it analyses each page, filters out links unrelated to e-commerce content, keeps the useful links, and places them into the URL queue. Then, under a given search strategy, it chooses the next URL from the queue, downloads the corresponding page, and repeats this process until the URL queue is empty. In addition, all fetched pages are stored on the local disk.
Based on an analysis of how web crawlers work, combined with multithreading technology, this thesis designs the crawler program.
Key Words: Search engine, Web Crawler, E-commerce
Contents
Abstract
Abstract
Contents
1 Introduction
1.1 Background and significance of the topic
1.2 Research status at home and abroad
1.3 Applications of crawler programs in e-commerce
1.4 Work completed in this thesis
2 Web Crawlers
2.1 Overview of search engines
2.1.1 General-purpose search engines
2.1.2 Topic-specific search engines
2.1.3 Performance metrics of search engines
2.2 Overview of web crawlers
2.2.1 Introduction to web crawlers
2.2.2 How web crawlers work
3 Design of the Topic-Specific Web Crawler
3.1 Crawler design principles
3.2 Application of threading technology
3.2.1 Creating threads
3.2.2 Inter-thread communication
3.3 Structural analysis of the web crawler
3.3.1 How to parse HTML
3.3.2 Structure of the Spider program
3.3.3 Building the Spider program
3.3.4 URL filtering strategy
3.4 Analysis of the running results
Conclusion
Acknowledgements
References