
Disclaimer: This project is intended for studying the Scrapy spider framework and the MongoDB database. It must not be used for commercial or other personal purposes; any liability arising from improper use is borne by the individual user.

  • The project crawls PornHub, the largest adult site in the world, retrieving each video's title, duration, mp4 link, cover URL, and direct PornHub URL.
  • This project crawls PornHub.com quickly, with a simple structure.
  • This project can crawl up to 5 million PornHub videos per day, depending on your network. Because of my slow bandwidth, my own results are slower.
  • The crawler issues 10 requests at a time, which is how it achieves the speed mentioned above. If your network is faster, you can raise the thread count and crawl more videos per day. For the specific configuration, see [pre-boot configuration].
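The "10 threads" above maps onto Scrapy's concurrency settings. A minimal sketch of the relevant settings.py entries follows; the names are standard Scrapy settings, but the exact values used by this project may differ:

```python
# Hypothetical excerpt of a Scrapy settings.py.
# CONCURRENT_REQUESTS controls how many requests run in parallel
# (the "threads" mentioned above); raise it on a faster network.
CONCURRENT_REQUESTS = 10
CONCURRENT_REQUESTS_PER_DOMAIN = 10
DOWNLOAD_DELAY = 1  # seconds between requests, to stay polite
```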

Environment, Architecture

Language: Python 2.7

Environment: macOS, 4 GB RAM

Database: MongoDB

  • Mainly uses the Scrapy crawler framework.
  • A cookie and a User-Agent are drawn at random from a cookie pool and a UA pool and attached to each Spider request.
  • start_requests launches five Requests based on PornHub's categories, so five categories are crawled concurrently.
  • Supports paginated crawling; follow-up pages are added to the request queue.
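The cookie-pool/UA-pool idea above can be sketched as a plain helper. The pool contents and the function name here are illustrative, not the project's actual code:

```python
import random

# Illustrative pools; the real project would load much larger ones.
UA_POOL = [
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12) AppleWebKit/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
]
COOKIE_POOL = [
    "platform=pc; bs=abc123",
    "platform=mobile; bs=def456",
]

def random_headers():
    """Pick a random User-Agent and cookie string for the next request."""
    return {
        "User-Agent": random.choice(UA_POOL),
        "Cookie": random.choice(COOKIE_POOL),
    }
```

Each outgoing request then carries a different identity, which makes the crawl harder to fingerprint and throttle.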

Instructions for use

Pre-boot configuration

  • Install MongoDB and start it; no configuration is needed.
  • Install the required Python modules: Scrapy, pymongo, and requests, or run pip install -r requirements.txt.
  • Modify the configuration as needed, such as the request interval, the number of threads, etc.
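The setup steps above amount to roughly the following commands (the MongoDB data path is a placeholder; adjust it for your machine):

```shell
# Install the Python dependencies, either individually...
pip install Scrapy pymongo requests
# ...or from the project's requirements file
pip install -r requirements.txt

# Start MongoDB with default settings (no extra configuration needed)
mongod --dbpath /path/to/your/db
```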

Start up

  • cd PornHub
  • python quickstart.py


Database description

The collection in the database that holds the data is PhRes. The field descriptions are as follows:

PhRes collection:

video_title:     The title of the video; used as the unique key
link_url:        Link to the video's page on PornHub
image_url:       Link to the video's cover image
video_duration:  The length of the video, in seconds
quality_480p:    Download address of the 480p mp4 video
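Putting the fields together, a stored PhRes document looks roughly like the dict below; the values are invented for illustration:

```python
# Illustrative PhRes document; field names follow the table above,
# values are made up.
doc = {
    "video_title": "example-title-123",  # unique key
    "link_url": "https://www.pornhub.com/view_video.php?viewkey=xxxx",
    "image_url": "https://example.com/cover.jpg",
    "video_duration": 754,  # seconds
    "quality_480p": "https://example.com/video_480p.mp4",
}

def format_duration(seconds):
    """Render the video_duration field (seconds) as m:ss."""
    return "%d:%02d" % divmod(seconds, 60)

print(format_duration(doc["video_duration"]))  # 12:34
```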

For Chinese

  • Follow the WeChat public account to learn Python development
The MIT License (MIT)

Copyright (c) 2017 xiyouMc

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
