# EastMoney Crawler: Python Scraper with Real-Time xlsx Delivery via SMTP

**Repository Path**: accessible-letter-TongDaXin/EastMoney_Crawl

## Basic Information

- **Project Name**: EastMoney crawler: Python scraper, real-time xlsx delivery to email via SMTP
- **Description**: Python crawler for eastmoney.com
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 2
- **Created**: 2024-04-09
- **Last Updated**: 2024-04-09

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

[toc]

***

# 1. Requirements

## 1.1 Overview

A Python crawler for eastmoney.com that sends the scraped data to the user's QQ mailbox as an xlsx file over SMTP.

![img](https://img-blog.csdnimg.cn/4ffa92d6d9d349ba95b59d8fb69faad2.jpg?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

![img](https://img-blog.csdnimg.cn/e78650601f27433ab9adb1d100d4a0d9.png)

Ratio: today's trading volume divided by the average volume of the previous five days.

The client later added a few requirements:

1. The number of records extracted is configurable, not necessarily the top 50.
2. Crawl once every half minute.
3. Sort the Excel output by main-force inflow ratio.
4. Allow a user-defined crawler start time.

## 1.2 Required Data

![img](https://img-blog.csdnimg.cn/1ba91ef4111641d2a893c97dab132d4f.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

This is the EastMoney main page. Except for the turnover rate and the ratio, every field we need is on the main page.

Opening a stock's detail page shows the turnover rate:

![img](https://img-blog.csdnimg.cn/16584a546382422d925269dc6bd4bd79.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

For the ratio, today's volume can be seen by clicking a stock name, e.g. BYD (比亚迪):

![img](https://img-blog.csdnimg.cn/21e98044c3b240e5b3ab8495db25142a.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

The previous five days' volumes, however, have to come from the candlestick (k-line) chart: hovering over the chart shows each day's volume.

# 2. The Crawler

## 2.1 API Analysis

### 2.1.1 Main-Page API

Step one: get the main-page data. Right-click → Inspect → search the network responses for a stock name that appears on the page:

![img](https://img-blog.csdnimg.cn/8882f82571ec4479a55f9d31b619b975.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

This JSON file contains the main-page data. Each `fxx` key corresponds to a different field, and the mapping can be worked out by hand.

![img](https://img-blog.csdnimg.cn/00b7636e41684be78d0d99d3b425201e.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

The Headers tab shows the request details.
![img](https://img-blog.csdnimg.cn/54793511c31443a2bb75ec044c2eace3.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

`pz` is the number of items per page and `pn` is the page number. The request method is GET. `fields` is followed by a long string: it lists the fields to return, and `%2C` is the URL-encoded comma. So we have the main-page API:

```
http://push2.eastmoney.com/api/qt/clist/get?cb=jQuery112304456859063185632_1636024376116&fid=f62&po=1&pz=50&pn=1&np=1&fltt=2&invt=2&ut=b2884a393a59ad64002292a3e90d46a5&fs=m%3A0%2Bt%3A6%2Bf%3A!2,m%3A0%2Bt%3A13%2Bf%3A!2,m%3A0%2Bt%3A80%2Bf%3A!2,m%3A1%2Bt%3A2%2Bf%3A!2,m%3A1%2Bt%3A23%2Bf%3A!2,m%3A0%2Bt%3A7%2Bf%3A!2,m%3A1%2Bt%3A3%2Bf%3A!2&fields=f12,f14,f2,f3,f62,f184,f66,f69,f72,f75,f78,f81,f84,f87,f204,f205,f124,f1,f13
```

As for the turnover rate: testing showed that adding extra `fields` to the main-page API does not yield a correct turnover rate or today's volume — the values come back wrong. These two fields have their own endpoint:

```
http://push2.eastmoney.com/api/qt/stock/get?cb=jQuery1123005706590503148967_1636431675969&fltt=2&invt=2&secid=0.002594&fields=f57%2Cf58%2Cf43%2Cf47%2Cf48%2Cf168%2Cf169%2Cf170%2Cf152&ut=b2884a393a59ad64002292a3e90d46a5&_=1636431675995
```

Requesting it in a browser and deleting the unnecessary parameters leaves:

```
http://push2.eastmoney.com/api/qt/stock/get?fltt=2&invt=2&secid=0.002594&fields=f47,f168&ut=b2884a393a59ad64002292a3e90d46a5&cb=jQuery11230690409531564167_1636970393200
```

`secid` has two parts: after the dot is the stock code; before the dot is a 0 or 1, which corresponds to field `f13` on the main page.

### 2.1.2 K-Line API

The average of the previous five days' volumes has to come from the k-line chart.

![img](https://img-blog.csdnimg.cn/63f12a80d0f944d7a13c3f31545f0d54.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)
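Both the jQuery callback wrapper and the `%2C` encoding can be undone with the standard library. Below is a minimal sketch; the response body is made up for illustration, and the regex is the same idea the crawler's `dataGet` uses later:

```python
import re
import json
from urllib.parse import unquote

# %2C in the query string is just the URL-encoded comma
print(unquote("f47%2Cf168"))  # → f47,f168

def strip_jsonp(text: str) -> dict:
    # pull the "data" object out of a jQueryxxx({...}); wrapper
    match = re.search(r'"data":({.*)}\);', text)
    return json.loads(match.group(1))

# made-up response body in the shape these endpoints return
body = 'jQuery112304456859063185632_1636024376116({"rc":0,"data":{"diff":[{"f12":"002594","f14":"BYD"}]}});'
print(strip_jsonp(body)["diff"][0]["f12"])  # → 002594
```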
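The "ratio" column itself is simple arithmetic once the volumes are in hand. A minimal sketch with made-up volume numbers (the crawler computes the same quantity in `SecondClean` below; the helper names here are illustrative):

```python
def volume_ratio(last_five, today):
    # today's volume divided by the average of the previous five days' volumes
    return today / (sum(last_five) / 5)

def in_range(ratio, bounds=(0.6, 1.1)):
    # keep a stock only if its ratio falls inside the configured range
    return bounds[0] <= ratio <= bounds[1]

five = [100.0, 200.0, 300.0, 400.0, 500.0]  # made-up volumes; average is 300
print(volume_ratio(five, 270.0))            # → 0.9
print(in_range(volume_ratio(five, 270.0)))  # → True
print(in_range(volume_ratio(five, 600.0)))  # → False
```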
The raw k-line request looks like this:

```
http://push2his.eastmoney.com/api/qt/stock/kline/get?fields1=f1,f2,f3,f4,f5&fields2=f51,f52,f53,f54,f55,f56,f57,f58,f59,f60,f61&fqt=0&end=29991010&ut=fa5fd1943c7b386f172d6893dbfba10b&cb=jQuery11230953118535226849_1636434792124&klt=101&secid=0.002594&fqt=1&lmt=1000&_=1636434792125
```

Deleting the unnecessary parameters gives:

```
http://push2his.eastmoney.com/api/qt/stock/kline/get?fields1=f1,f2,f3,f4,f5&fields2=f56&fqt=0&end=29991010&ut=fa5fd1943c7b386f172d6893dbfba10b&cb=jQuery1123038436182619159487_1636971507184&klt=101&secid={f13}.{f12}&fqt=1&lmt=6
```

`fields1` is not needed; `fields2` selects the k-line data. `lmt` is the number of records to return: we only need the previous five days, so `lmt` is set to 6. `secid` works exactly as analyzed in the first step. `f56` is the volume, so `fields2` drops everything except `f56` to simplify later processing.

At this point we have all three APIs: the main page; the turnover rate and today's volume; and the k-line.

## 2.2 Crawler Code

```python
import requests
import re
import json
import time
import pandas as pd


# There are many parameters, so keep them in a dedicated class
class Variable:
    def __init__(self, zhuli, zhangfu: list, chaodadan, chengjiaoliang: list, huanshoulv, num):
        self.zhuli = zhuli                    # main-force ratio threshold
        self.zhangfu = zhangfu                # price-change range [low, high]
        self.chaodadan = chaodadan            # super-large-order ratio threshold
        self.chengjiaoliang = chengjiaoliang  # volume-ratio range [low, high]
        self.huanshoulv = huanshoulv          # turnover-rate ceiling
        self.num = num                        # number of stocks to fetch
        # self.now = time.localtime().tm_mon*30 + time.localtime().tm_mday  # only the last 5 days are used, no date check needed
        self.ua = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
                   "Host": "push2.eastmoney.com",
                   "Connection": "keep-alive",
                   "Cache-Control": "max-age=0"}


class Spider:
    def __init__(self, Vb: Variable):
        self.url = "http://data.eastmoney.com/zjlx/detail.html"
        self.HomePageUrl = "http://push2.eastmoney.com/api/qt/clist/get?cb=jQuery112304456859063185632_1636024376116&fid=f62&po=1&pz={pz}&pn=1&np=1&fltt=2&invt=2&ut=b2884a393a59ad64002292a3e90d46a5&fs=m%3A0%2Bt%3A6%2Bf%3A!2,m%3A0%2Bt%3A13%2Bf%3A!2,m%3A0%2Bt%3A80%2Bf%3A!2,m%3A1%2Bt%3A2%2Bf%3A!2,m%3A1%2Bt%3A23%2Bf%3A!2,m%3A0%2Bt%3A7%2Bf%3A!2,m%3A1%2Bt%3A3%2Bf%3A!2&fields=f12,f14,f2,f3,f62,f184,f66,f69,f72,f75,f78,f81,f84,f87,f204,f205,f124,f1,f13"
        self.klineApi = "http://push2his.eastmoney.com/api/qt/stock/kline/get?fields1=f1,f2,f3,f4,f5&fields2=f56&fqt=0&end=29991010&ut=fa5fd1943c7b386f172d6893dbfba10b&cb=jQuery1123038436182619159487_1636971507184&klt=101&secid={f13}.{f12}&fqt=1&lmt=6"
        self.f47Api = "http://push2.eastmoney.com/api/qt/stock/get?fltt=2&invt=2&secid={f13}.{f12}&fields=f47,f168&ut=b2884a393a59ad64002292a3e90d46a5&cb=jQuery11230690409531564167_1636970393200"
        self.vb = Vb
        self.namelist = []
        self.retry = 0

    def HomePageGet(self):
        # fetch the main-page data
        response = requests.get(self.HomePageUrl.format(pz=self.vb.num), headers=self.vb.ua)
        # return the data list
        return self.dataGet(response)['diff']

    def request(self, url):
        while True:
            try:
                response = requests.get(url, headers=self.vb.ua, verify=False, timeout=2)
                self.retry = 0
                return response
            except requests.RequestException:
                self.retry += 1
                if self.retry == 3:
                    self.retry = 0
                    return False

    # Top 50; main-force ratio above 16.4; change above 4.4; super-large-order ratio above 9.4;
    # turnover rate at most 8%; volume at 0.6-1.1x the 5-day average.
    # Volume cannot be filtered here; it needs follow-up requests.
    # First-pass filter
    def FirstClean(self, diff: list):
        # clean the main-page data
        nowdata = []
        for i in diff:
            if i['f184'] > self.vb.zhuli:  # main-force ratio
                if i['f3'] > self.vb.zhangfu[0] and i['f3'] < self.vb.zhangfu[1]:  # price change
                    if i['f69'] > self.vb.chaodadan:  # super-large orders
                        nowdata.append(i)
        return nowdata

    # Filter for volume at 0.6-1.1x the 5-day average; needs the previous five days.
    # volume api: https://push2his.eastmoney.com/api/qt/stock/kline/get?fields1=f1&fields2=f55&fqt=0&end=29991010&klt=101&secid={f13}.{f12}&lmt=5
    # HomePageUrl cannot return f47, so it is requested separately
    def SecondClean(self, data: list):
        # request the k-line data; secid must be rewritten per stock:
        # secid = {f13}.{f12}, where f13 (0 or 1) comes from the main-page data
        result = []
        for line in data:
            response = self.request(self.klineApi.format(f13=line['f13'], f12=line['f12']))
            if not response:
                continue
            klines = self.dataGet(response)['klines']
            kline = []
            try:
                for i in range(5):
                    kline.append(float(klines[i]))
            except (ValueError, IndexError):
                continue
            fiveday = sum(kline) / 5
            # f47: today's volume
            response = self.request(self.f47Api.format(f13=line['f13'], f12=line['f12']))
            if not response:
                continue
            d = self.dataGet(response)
            if d['f168'] / 100 >= self.vb.huanshoulv:  # turnover-rate ceiling
                continue
            today = d['f47']
            # volume at 0.6-1.1x the 5-day average
            value = today / fiveday
            if value >= self.vb.chengjiaoliang[0] and value <= self.vb.chengjiaoliang[1]:
                # code, name, latest price, change, main-force net inflow and ratio,
                # super-large-order ratio, turnover rate, volume ratio
                result.append([line['f12'], line['f14'], line['f2'], str(line['f3']) + "%",
                               self.Format(line['f62']), str(line['f184']) + "%",
                               str(line['f69']) + '%', str(d['f168'] / 100) + "%", '%.2f' % value])
        return result

    # decide whether the data needs saving
    def JudgeSave(self, data: list):
        # header row: No., code, name, latest price, change, main-force net inflow,
        # main-force ratio, super-large-order ratio, turnover rate, volume ratio
        result = [['序号', '股票代码', '名称', '最新价', '今日涨跌幅', '今日主力净流入',
                   '今日主力占比', '超大单占比', '换手率', '比值']]
        count = 1
        for line in data:
            # dedup across runs, currently disabled:
            # if line[1] not in self.namelist:
            #     self.namelist.append(line[1])
            line.insert(0, count)
            result.append(line)
            count += 1
        # sort the data rows by main-force ratio, descending (the header row stays first)
        result[1:] = sorted(result[1:], key=self.s, reverse=True)
        if len(result) != 1:
            # save the file
            excelData = pd.DataFrame(result)
            writer = pd.ExcelWriter("股票.xlsx")
            excelData.to_excel(excel_writer=writer, index=False, header=False)
            writer.close()
            return True
        else:
            return False

    # return the data dict
    def dataGet(self, response):
        d = re.compile(r'"data":({.*)}\);')
        data = re.findall(d, response.text)[0]
        return json.loads(data)

    def Format(self, data):
        # add 万 (10^4) / 亿 (10^8) units
        if len(str(data)) > 8:
            return '%.2f' % (data / (10 ** 8)) + '亿'
        elif len(str(data)) == 8:
            return '%.2f' % (data / (10 ** 7)) + '千万'
        elif len(str(data)) > 4:
            return '%.2f' % (data / (10 ** 4)) + '万'
        else:
            return str(data)

    def s(self, elem):
        # sort key: the main-force ratio column, e.g. "16.5%"
        return float(elem[-4].rstrip('%'))

    def run(self):
        self.data = self.HomePageGet()
        self.data = self.FirstClean(self.data)
        self.data = self.SecondClean(self.data)
        return self.JudgeSave(self.data.copy())
```

`HomePageGet`, `FirstClean` and `SecondClean` fetch the data and filter it by the configured conditions. `JudgeSave` saves the result as an xlsx spreadsheet; it returns `True` when fresh data was saved and `False` when there was nothing to save. `dataGet` does simple JSON cleanup. `Format` adds units (the JSON values come without any). `s` is the sort key used by `JudgeSave` when ordering by main-force ratio.

With the code above we can fetch the data. Run it:

```python
if __name__ == "__main__":
    # parameters
    zhuli = 16.4
    zhangfu = [4, 100]  # [lower, upper] bound on the change; the upper bound here is effectively off
    chaodadan = 9.4
    # chengjiaoliang = [0.6, 1.1]
    chengjiaoliang = [1, 10]
    huanshoulv = 6
    num = 50

    vb = Variable(zhuli, zhangfu, chaodadan, chengjiaoliang, huanshoulv, num)
    spider = Spider(vb)
    spider.run()
```

If nothing unexpected happens, you will get this error:

```
requests.exceptions.SSLError: dh key too small
```

If you didn't, run it a few more times and you will. Add these lines:

```python
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += 'HIGH:!DH:!aNULL'
try:
    requests.packages.urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST += 'HIGH:!DH:!aNULL'
except AttributeError:
    # no pyopenssl support used / needed / available
    pass
```

and you get a new warning instead. One more line takes care of that:

```python
requests.packages.urllib3.disable_warnings()
```

![img](https://img-blog.csdnimg.cn/c233747df36b42e1b6d71ac4194b830d.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

Set the volume-ratio range fairly wide, otherwise nothing passes the filter. That completes the crawler.

# 3. Sending the Email

Basic background: mail is transferred with the SMTP and POP3 protocols, and MIME extends the message format. POP3 is what a client uses to receive mail; SMTP is what a client uses to send it. POP3 only receives; between servers SMTP both sends and receives, but from a client it only sends. For more on the mail protocols, see other articles or a computer-networking textbook; for full detail, look up the SMTP protocol specification.

SMTP servers:

| provider | SMTP server domain name |
| ----------- | ----------------------- |
| qq | smtp.qq.com |
| Gmail | smtp.gmail.com |
| outlook.com | smtp-mail.outlook.com |
| Yahoo Mail | smtp.mail.yahoo.com |
| Comcast | smtp.comcast.net |

The table above lists some providers and their SMTP server domain names. We will be sending to a QQ mailbox.
SMTP generally uses port 25; QQ Mail uses port 465 (SMTP over SSL). Sending mail requires the SMTP service to be enabled: log in to QQ Mail on the web and open Settings → Account:

![img](https://img-blog.csdnimg.cn/dda9bf2234b04db692b54e2731147e71.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

Scroll down to:

![img](https://img-blog.csdnimg.cn/792845a88d5840fb8c280ddb44b845a6.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA56u55LiA56yU6K6w,size_20,color_FFFFFF,t_70,g_se,x_16)

Enable the POP3/SMTP service. Sending the verification SMS enables SMTP and gives you an authorization code. If the service is already enabled, just click "generate authorization code" below it.

```python
import sys
import smtplib
from email.header import Header
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart


class QQEmail:
    def __init__(self, qqCode, sender, senderName, receiver, receiverName):
        self.qqCode = qqCode
        self.sender = sender
        self.senderName = senderName
        self.receiver = receiver
        self.receiverName = receiverName
        self.MailLogin()

    # create the SMTP session and log in
    def MailLogin(self):
        try:
            self.server = smtplib.SMTP_SSL(host='smtp.qq.com', port=465)
            self.server.login(self.sender, self.qqCode)
            print("Mailbox login succeeded!")
        except Exception as e:
            print("SMTP connection failed! msg:", e)
            sys.exit(0)

    def MessageBuild(self):
        content = MIMEText(' ')  # mail body; fill in if needed
        message = MIMEMultipart()
        message.attach(content)  # attach the body
        message['From'] = self.senderName
        message['To'] = self.receiverName
        message['Subject'] = Header('标题', 'utf-8')  # subject
        xlsx = MIMEApplication(open('文件.xlsx', 'rb').read())
        xlsx['Content-Type'] = 'application/octet-stream'
        xlsx.add_header('Content-Disposition', 'attachment', filename='股票.xlsx')
        message.attach(xlsx)
        return message

    def SendMail(self):
        try:
            self.server.sendmail(from_addr=self.sender, to_addrs=self.receiver,
                                 msg=self.MessageBuild().as_string())
            print('Sent! ', end="")
        except smtplib.SMTPException as e:
            print("Mail sending failed! msg:", e)
```

`qqCode` is the authorization code. `receiver` can be several accounts (a list) or a single account (a string).

Note: QQ Mail rate-limits outgoing mail. If you send too frequently, the account is briefly blocked. You can either **add several sender accounts**, or pause for a few minutes after each block and log in again before resending. Only `SendMail` needs to change (this version also needs `import time`):

```python
    def SendMail(self):
        retry = 0
        while True:
            try:
                self.server.sendmail(from_addr=self.sender, to_addrs=self.receiver,
                                     msg=self.MessageBuild().as_string())
                print('Sent! ', end="")
                break
            except smtplib.SMTPException as e:
                print("Mailbox error, retrying!", e)
                self.MailLogin()
                retry += 1
                if retry == 2:
                    print("Mailbox error! Retrying in five minutes")
                    time.sleep(300)
                    retry = 0
```

# 4. Complete Code

Assemble the modules above, add a few small tweaks, and we get:

```python
# -*- coding: UTF-8 -*-
import sys
import requests
import re
import json
import time
import pandas as pd
import smtplib
from email.header import Header
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart

'''
Requirement: http://data.eastmoney.com/zjlx/detail.html
Top 50; main-force ratio above 16.4; change above 4.4; super-large-order ratio above 9.4;
volume at 0.6-1.1x the 5-day average; turnover rate at most 8.
Parameters adjustable; stocks already reported should not repeat;
notify by mail or otherwise, in real time.
'''

'''
Field reference:
f2:  latest price
f3:  today's change
f12: stock code
f14: stock name
main-force net inflow today:      f62 (amount), f184 (ratio)
super-large-order net inflow:     f66 (amount), f69 (ratio)
large-order net inflow:           f72 (amount), f75 (ratio)
medium-order net inflow:          f78 (amount), f81 (ratio)
small-order net inflow:           f84 (amount), f87 (ratio)
f168: turnover rate
f47:  volume
f124
'''

# (author's note) f47 data looked wrong — line 113

# requests.exceptions.SSLError: dh key too small
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += 'HIGH:!DH:!aNULL'
try:
    requests.packages.urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST += 'HIGH:!DH:!aNULL'
except AttributeError:
    # no pyopenssl support used / needed / available
    pass

# disable warnings
requests.packages.urllib3.disable_warnings()


class Variable:
    def __init__(self, zhuli, zhangfu: list, chaodadan, chengjiaoliang: list, huanshoulv, num):
        self.zhuli = zhuli                    # main-force ratio threshold
        self.zhangfu = zhangfu                # price-change range [low, high]
        self.chaodadan = chaodadan            # super-large-order ratio threshold
        self.chengjiaoliang = chengjiaoliang  # volume-ratio range [low, high]
        self.huanshoulv = huanshoulv          # turnover-rate ceiling
        self.num = num                        # number of stocks to fetch
        # self.now = time.localtime().tm_mon*30 + time.localtime().tm_mday  # only the last 5 days are used, no date check needed
        self.ua = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
                   "Host": "push2.eastmoney.com",
                   "Connection": "keep-alive",
                   "Cache-Control": "max-age=0"}


class Spider:
    def __init__(self, Vb: Variable):
        self.url = "http://data.eastmoney.com/zjlx/detail.html"
        self.HomePageUrl = "http://push2.eastmoney.com/api/qt/clist/get?cb=jQuery112304456859063185632_1636024376116&fid=f62&po=1&pz={pz}&pn=1&np=1&fltt=2&invt=2&ut=b2884a393a59ad64002292a3e90d46a5&fs=m%3A0%2Bt%3A6%2Bf%3A!2,m%3A0%2Bt%3A13%2Bf%3A!2,m%3A0%2Bt%3A80%2Bf%3A!2,m%3A1%2Bt%3A2%2Bf%3A!2,m%3A1%2Bt%3A23%2Bf%3A!2,m%3A0%2Bt%3A7%2Bf%3A!2,m%3A1%2Bt%3A3%2Bf%3A!2&fields=f12,f14,f2,f3,f62,f184,f66,f69,f72,f75,f78,f81,f84,f87,f204,f205,f124,f1,f13"
        self.klineApi = "http://push2his.eastmoney.com/api/qt/stock/kline/get?fields1=f1,f2,f3,f4,f5&fields2=f56&fqt=0&end=29991010&ut=fa5fd1943c7b386f172d6893dbfba10b&cb=jQuery1123038436182619159487_1636971507184&klt=101&secid={f13}.{f12}&fqt=1&lmt=6"
        self.f47Api = "http://push2.eastmoney.com/api/qt/stock/get?fltt=2&invt=2&secid={f13}.{f12}&fields=f47,f168&ut=b2884a393a59ad64002292a3e90d46a5&cb=jQuery11230690409531564167_1636970393200"
        self.vb = Vb
        self.namelist = []
        self.retry = 0

    def HomePageGet(self):
        # fetch the main-page data
        response = requests.get(self.HomePageUrl.format(pz=self.vb.num), headers=self.vb.ua)
        # return the data list
        return self.dataGet(response)['diff']

    def request(self, url):
        while True:
            try:
                response = requests.get(url, headers=self.vb.ua, verify=False, timeout=2)
                self.retry = 0
                return response
            except requests.RequestException:
                self.retry += 1
                if self.retry == 3:
                    self.retry = 0
                    return False

    # First-pass filter on the main-page data
    def FirstClean(self, diff: list):
        nowdata = []
        for i in diff:
            if i['f184'] > self.vb.zhuli:  # main-force ratio
                if i['f3'] > self.vb.zhangfu[0] and i['f3'] < self.vb.zhangfu[1]:  # price change
                    if i['f69'] > self.vb.chaodadan:  # super-large orders
                        nowdata.append(i)
        return nowdata

    # Filter for volume at 0.6-1.1x the 5-day average; needs follow-up requests.
    # HomePageUrl cannot return f47, so it is requested separately
    def SecondClean(self, data: list):
        # request the k-line data; secid = {f13}.{f12}, f13 from the main-page data
        result = []
        for line in data:
            response = self.request(self.klineApi.format(f13=line['f13'], f12=line['f12']))
            if not response:
                continue
            klines = self.dataGet(response)['klines']
            kline = []
            try:
                for i in range(5):
                    kline.append(float(klines[i]))
            except (ValueError, IndexError):
                continue
            fiveday = sum(kline) / 5
            # f47: today's volume
            response = self.request(self.f47Api.format(f13=line['f13'], f12=line['f12']))
            if not response:
                continue
            d = self.dataGet(response)
            if d['f168'] / 100 >= self.vb.huanshoulv:  # turnover-rate ceiling
                continue
            today = d['f47']
            # volume at 0.6-1.1x the 5-day average
            value = today / fiveday
            if value >= self.vb.chengjiaoliang[0] and value <= self.vb.chengjiaoliang[1]:
                result.append([line['f12'], line['f14'], line['f2'], str(line['f3']) + "%",
                               self.Format(line['f62']), str(line['f184']) + "%",
                               str(line['f69']) + '%', str(d['f168'] / 100) + "%", '%.2f' % value])
        return result

    # decide whether the data needs saving
    def JudgeSave(self, data: list):
        # header row: No., code, name, latest price, change, main-force net inflow,
        # main-force ratio, super-large-order ratio, turnover rate, volume ratio
        result = [['序号', '股票代码', '名称', '最新价', '今日涨跌幅', '今日主力净流入',
                   '今日主力占比', '超大单占比', '换手率', '比值']]
        count = 1
        for line in data:
            # dedup across runs, currently disabled:
            # if line[1] not in self.namelist:
            #     self.namelist.append(line[1])
            line.insert(0, count)
            result.append(line)
            count += 1
        # sort the data rows by main-force ratio, descending (the header row stays first)
        result[1:] = sorted(result[1:], key=self.s, reverse=True)
        if len(result) != 1:
            # save the file
            excelData = pd.DataFrame(result)
            writer = pd.ExcelWriter("股票.xlsx")
            excelData.to_excel(excel_writer=writer, index=False, header=False)
            writer.close()
            return True
        else:
            return False

    # return the data dict
    def dataGet(self, response):
        d = re.compile(r'"data":({.*)}\);')
        data = re.findall(d, response.text)[0]
        return json.loads(data)

    def Format(self, data):
        # add 万 (10^4) / 亿 (10^8) units
        if len(str(data)) > 8:
            return '%.2f' % (data / (10 ** 8)) + '亿'
        elif len(str(data)) == 8:
            return '%.2f' % (data / (10 ** 7)) + '千万'
        elif len(str(data)) > 4:
            return '%.2f' % (data / (10 ** 4)) + '万'
        else:
            return str(data)

    def s(self, elem):
        # sort key: the main-force ratio column, e.g. "16.5%"
        return float(elem[-4].rstrip('%'))

    def run(self):
        self.data = self.HomePageGet()
        self.data = self.FirstClean(self.data)
        self.data = self.SecondClean(self.data)
        return self.JudgeSave(self.data.copy())


class QQEmail:
    def __init__(self, qqCode, sender, senderName, receiver, receiverName):
        self.qqCode = qqCode
        self.sender = sender
        self.senderName = senderName
        self.receiver = receiver
        self.receiverName = receiverName
        self.MailLogin()

    # create the SMTP session and log in
    def MailLogin(self):
        try:
            self.server = smtplib.SMTP_SSL(host='smtp.qq.com', port=465)
            self.server.login(self.sender, self.qqCode)
            print("Mailbox login succeeded!")
        except Exception as e:
            print("SMTP connection failed! msg:", e)
            sys.exit(0)

    def MessageBuild(self):
        content = MIMEText(' ')  # mail body; fill in if needed
        message = MIMEMultipart()
        message.attach(content)  # attach the body
        message['From'] = self.senderName
        message['To'] = self.receiverName
        message['Subject'] = Header('股票', 'utf-8')  # subject: "stocks"
        xlsx = MIMEApplication(open('股票.xlsx', 'rb').read())
        xlsx['Content-Type'] = 'application/octet-stream'
        xlsx.add_header('Content-Disposition', 'attachment', filename='股票.xlsx')
        message.attach(xlsx)
        return message

    def SendMail(self):
        retry = 0
        while True:
            try:
                self.server.sendmail(from_addr=self.sender, to_addrs=self.receiver,
                                     msg=self.MessageBuild().as_string())
                print('Sent! ', end="")
                break
            except smtplib.SMTPException as e:
                print("Mailbox error, retrying!", e)
                self.MailLogin()
                retry += 1
                if retry == 2:
                    print("Mailbox error! Retrying in five minutes")
                    time.sleep(300)
                    retry = 0


class Control(Spider, QQEmail):
    def __init__(self, zhuli, zhangfu, chaodadan, chengjiaoliang, huanshoulv,
                 qqCode, sender, senderName, receiver, receiverName, num):
        # parameter holder
        self.vb = Variable(zhuli, zhangfu, chaodadan, chengjiaoliang, huanshoulv, num)
        # mail client
        self.qqmail = QQEmail(qqCode, sender, senderName, receiver, receiverName)
        # crawler
        self.spider = Spider(self.vb)

    def main(self, stop, end):
        print("Crawler started")
        count = 1
        while True:
            send = self.spider.run()
            if send:
                print("Got fresh stock data!", end="")
                self.qqmail.SendMail()
                print(count)
                count += 1
            time.sleep(stop)
            if time.localtime().tm_hour == end[0]:
                if time.localtime().tm_min == end[1]:
                    break


def getTime(t: str):
    return [int(a) for a in t.split(":")]


if __name__ == '__main__':
    # parameters
    zhuli = 16.4
    zhangfu = [4, 100]  # [lower, upper] bound on the change; the upper bound here is effectively off
    chaodadan = 9.4
    # chengjiaoliang = [0.6, 1.1]
    chengjiaoliang = [1, 10]
    huanshoulv = 6
    num = 50
    stop = 30  # seconds between crawls

    # mailbox settings
    qqCode = "授权码"        # authorization code
    sender = "发送邮箱"      # sender address
    receiver = '接收邮箱'    # receiver address(es)
    senderName = '股票爬虫'  # display name: "stock crawler"
    receiverName = '接收者'  # display name: "receiver"

    # schedule
    start = '20:57'
    end = '9:30'
    start = getTime(start)
    end = getTime(end)
    # wait for the start time:
    # while True:
    #     if time.localtime().tm_hour == start[0]:
    #         if time.localtime().tm_min == start[1]:
    #             break
    #     else:
    #         time.sleep(10)

    # go
    control = Control(zhuli, zhangfu, chaodadan, chengjiaoliang, huanshoulv,
                      qqCode, sender, senderName, receiver, receiverName, num)
    control.main(stop, end)
```
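The commented-out startup loop can be factored into a small helper. A sketch assuming the same minute-granularity check as the original; `wait_until` is a hypothetical name, and only `getTime` appears in the code above:

```python
import time

def getTime(t: str):
    # "20:57" -> [20, 57]
    return [int(a) for a in t.split(":")]

def wait_until(start, poll=10):
    # block until local time reaches start = [hour, minute];
    # polling every 10 seconds means the target minute cannot be skipped over
    while True:
        now = time.localtime()
        if now.tm_hour == start[0] and now.tm_min == start[1]:
            return
        time.sleep(poll)

print(getTime("20:57"))  # → [20, 57]
```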