# API_first
**Repository Path**: shuimushisan/api_first
## Basic Information
- **Project Name**: API_first
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-10-24
- **Last Updated**: 2020-12-19
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# API Learning Notes
* [1. Learning Reflections](#learn)
----------------------Code Demos---------------------------
* [2. Face Recognition](#face)
* [3. Computer Vision](#vision)
---
1. Learning Reflections
#### About APIs
At first, what worried me most was whether I could even understand the Python course. Who knew that the subject that would end up giving me the biggest headache would be APIs; at times I almost wanted to go back to freshman year and relive the "joy" (not really) of building web pages.
#### Difficulties and How I Solved Them
**1. Falling behind in class**
A few friends and I would **take phone photos and record short videos** to capture knowledge points we couldn't absorb in time, then rewatch the videos while reviewing the handouts after class to re-absorb the teacher's key points and redo the exercises.
**2. Frequent failures when working through the handouts**
As the saying goes, failure is the mother of success. I have seen 404, 400, `xx is not defined`, and the rest so often that I could almost recite the causes of these errors backwards. But there is always a way forward.
- **Go straight to the official docs**, for example Microsoft's [Cognitive Services API reference](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). (Don't stare at one fixed page: when deleting a face, look up Delete Face; when adding one, look up Create, and so on; browse the table of contents yourself.) The reference analyzes the causes behind each response and error code, although sometimes its explanation just restates the error message and isn't much help.

- **Search on bing.com.** With so many people writing code, someone has surely hit the same error as you; read widely and learn from them.

- **Work with a classmate**, probably the most efficient approach of all. Sometimes the same code produces different wrong results for two people, but two heads are better than one: reviewing each other's code and hunting for each other's bugs often ends up solving your own, and stepping into each other's pitfalls is far more efficient than grinding away alone.

- **Compare against last week's handout**: check how your code differs from what the teacher provided, or diff it against the surrounding context to fill in the gaps. The success rate goes up a lot.
- **If you really can't crack it, take a break and listen to some music.** This isn't something you can force out of yourself; let your brain rest, otherwise you'll be too annoyed to calm down and find your own mistake, and only waste time!
#### Learning Feedback
- From not understanding the handouts at all and having no idea where to start, to only getting as far as delete face, to now being able to finish roughly 80% of the whole document, my ability has improved, I'd say. At first an error message would throw me into a panic; now I know how to read the hint and locate the failing line, and I'm gradually learning how to fix common mistakes such as typos, forgetting to define a module, or leaving out part of the code. And I can truly feel what the teacher meant by "you'll be very happy to see this '200' later": at the moment of success there really is a small sense of achievement, though mostly my head is filled with one thought: "free at last."
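The status codes mentioned above (200, 400, 404) each have a standard meaning, and Python's standard library can look them up; a small sketch that may help when debugging:

```python
from http import HTTPStatus

# Print the standard reason phrase for the status codes seen while debugging
for code in (200, 400, 404):
    status = HTTPStatus(code)
    print(code, status.phrase)  # 200 OK / 400 Bad Request / 404 Not Found
```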
---
2. Face Recognition
### Azure Cognitive Services: Face Demo
#### 1. Face Verification
**(1). create facelist**
- Request
```python
import requests
import json

# 1. Create the face list
faceListId = "blackpink0"
create_facelists_url = "https://api-gzf123.cognitiveservices.azure.com/face/v1.0/facelists/{}"
subscription_key = "9415313ee78e4d79a0d6cf603d0177f0"
assert subscription_key
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': subscription_key,
}
data = {
    "name": "blackpink",
    "userData": "共4人,4个女生",
    "recognitionModel": "recognition_03",
}
r_create = requests.put(create_facelists_url.format(faceListId), headers=headers, json=data)
```
- Response
```python
r_create
r_create.content
b''
```
**(2). get facelist**
- Request
```python
# Check the face list's information
get_facelist_url = "https://api-gzf123.cognitiveservices.azure.com/face/v1.0/facelists/{}"
r_get_facelist = requests.get(get_facelist_url.format(faceListId), headers=headers)
r_get_facelist.json()
```
- Response
```python
{'persistedFaces': [],
'faceListId': 'blackpink0',
'name': 'blackpink',
'userData': '共4人,4个女生'}
```
#### 2. Add Faces
**(1). add face**
```python
# Try adding one face first
# 2. Add face
import requests

faceListId = "blackpink0"
add_face_url = 'https://api-gzf123.cognitiveservices.azure.com/face/v1.0/facelists/{}/persistedFaces'
subscription_key = "9415313ee78e4d79a0d6cf603d0177f0"
assert subscription_key
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': subscription_key,
}
img_url = 'https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1603259019640&di=73a5bee219d76628d7a1bb2002474f4d&imgtype=0&src=http%3A%2F%2Finews.gtimg.com%2Fnewsapp_bt%2F0%2F11590985606%2F1000.jpg'
params_add_face = {
    # faceListId goes in the URL path; userData is the only query parameter
    "userData": "金智秀(JISOO)、金智妮(JENNIE)、朴彩英(ROSÉ)、LISA"
}
r = r_add_face = requests.post(add_face_url.format(faceListId), headers=headers, params=params_add_face, json={"url": img_url})
r
```
- Response
```python
```
**(2). Extension**
```python
# Wrap the call in a function so images can be added repeatedly
def AddFace(img_url, userData):
    add_face_url = "https://api-gzf123.cognitiveservices.azure.com/face/v1.0/facelists/blackpink0/persistedFaces"
    assert subscription_key
    headers = {
        # Request headers
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': subscription_key,
    }
    params_add_face = {
        "userData": userData
    }
    r_add_face = requests.post(add_face_url, headers=headers,
                               params=params_add_face, json={"url": img_url})
    return r_add_face.status_code  # return the status code

AddFace("https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1603279444074&di=9cecf251429bc212a74ad7639e155094&imgtype=0&src=http%3A%2F%2Fhbimg.b0.upaiyun.com%2F83f564e864ce58a7c8fab21ae070fbb5705fd9ce57e76-kAuiwb_fw658", "LISA")
```
```python
AddFace("https://ss0.bdstatic.com/70cFvHSh_Q1YnxGkpoWK1HF6hhy/it/u=2694468616,773630720&fm=26&gp=0.jpg","ROSE")
AddFace("https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1603258616590&di=974a8da73c68d31a10b98a9946258479&imgtype=0&src=http%3A%2F%2Fc-ssl.duitang.com%2Fuploads%2Fitem%2F202002%2F01%2F20200201134304_veh38.jpeg","JENNIE")
AddFace("https://ss1.bdstatic.com/70cFvXSh_Q1YnxGkpoWK1HF6hhy/it/u=1905921636,416698009&fm=26&gp=0.jpg","JISOO")
```
- Response
```
200
```
#### 3. Delete a Face
```python
# Delete a persisted face from the face list by its ID
faceListId = "blackpink0"
delete_face_url = "https://api-gzf123.cognitiveservices.azure.com/face/v1.0/facelists/{}/persistedFaces/{}"
assert subscription_key
# e.g. to delete: {'persistedFaceId': '69103b48-b6c4-4f58-8ac1-4c8b84e56bc1', 'userData': '黄智毅'}
# Take the ID obtained above, e.g. {'persistedFaceId': 'f18450d3-60d2-45f3-a69e-783574dc3ce8'}
persistedFaceId = r_add_face.json()['persistedFaceId']
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': subscription_key,
}
# Note: the request method is DELETE
r_delete_face = requests.delete(delete_face_url.format(faceListId, persistedFaceId), headers=headers)
r_delete_face
```
- Response
```python
```
#### 4. Face Similarity
```python
# Detect: get the ID of a face
# replace with the string from your endpoint URL
face_api_url = 'https://api-gzf123.cognitiveservices.azure.com/face/v1.0/detect'
# Request body
image_url = 'https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1603281964142&di=0643d065964179f24e53677b5fb53f7c&imgtype=0&src=http%3A%2F%2Fpic3.zhimg.com%2F50%2Fv2-77acb6241ff2e55930ef74e17d977903_hd.jpg'
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
# Request parameters
params = {
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    # Choose the models
    'recognitionModel': 'recognition_03',  # must match the face list's recognitionModel
    'detectionModel': 'detection_01',
    # Optional parameter; read the API docs carefully
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
}
response = requests.post(face_api_url, params=params, headers=headers, json={"url": image_url})
# Parse the JSON response
response.json()
```
- Response
```python
[{'faceId': '4beb073f-7fdf-4d37-9d61-d83c07c396d6',
'faceRectangle': {'top': 267, 'left': 118, 'width': 450, 'height': 450},
'faceAttributes': {'smile': 0.002,
'headPose': {'pitch': -16.5, 'roll': 4.8, 'yaw': -0.5},
'gender': 'female',
'age': 20.0,
'facialHair': {'moustache': 0.0, 'beard': 0.0, 'sideburns': 0.0},
'glasses': 'NoGlasses',
'emotion': {'anger': 0.0,
'contempt': 0.0,
'disgust': 0.0,
'fear': 0.0,
'happiness': 0.002,
'neutral': 0.995,
'sadness': 0.002,
'surprise': 0.0},
'blur': {'blurLevel': 'medium', 'value': 0.72},
'exposure': {'exposureLevel': 'goodExposure', 'value': 0.47},
'noise': {'noiseLevel': 'medium', 'value': 0.65},
'makeup': {'eyeMakeup': True, 'lipMakeup': True},
'accessories': [],
'occlusion': {'foreheadOccluded': False,
'eyeOccluded': False,
'mouthOccluded': False},
'hair': {'bald': 0.04,
'invisible': False,
'hairColor': [{'color': 'brown', 'confidence': 1.0},
{'color': 'red', 'confidence': 0.71},
{'color': 'black', 'confidence': 0.68},
{'color': 'blond', 'confidence': 0.11},
{'color': 'other', 'confidence': 0.1},
{'color': 'gray', 'confidence': 0.03},
{'color': 'white', 'confidence': 0.0}]}}}]
```
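The detect response above is just a list of dicts, so its fields can be pulled out with ordinary indexing. A minimal offline sketch on a trimmed-down copy of the response (the helper name `dominant_emotion` is mine, not part of the API):

```python
def dominant_emotion(face):
    """Return the emotion with the highest score for one detected face."""
    emotions = face['faceAttributes']['emotion']
    return max(emotions, key=emotions.get)

# Trimmed-down copy of the detect response shown above
faces = [{'faceId': '4beb073f-7fdf-4d37-9d61-d83c07c396d6',
          'faceAttributes': {'emotion': {'anger': 0.0, 'happiness': 0.002,
                                         'neutral': 0.995, 'sadness': 0.002}}}]
print(faces[0]['faceId'])          # the ID to feed into findsimilars
print(dominant_emotion(faces[0]))  # neutral
```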
```python
# Detect: get the ID of a face (same call as above)
# replace with the string from your endpoint URL
face_api_url = 'https://api-gzf123.cognitiveservices.azure.com/face/v1.0/detect'
# Request body
image_url = 'https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1603281964142&di=0643d065964179f24e53677b5fb53f7c&imgtype=0&src=http%3A%2F%2Fpic3.zhimg.com%2F50%2Fv2-77acb6241ff2e55930ef74e17d977903_hd.jpg'
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
# Request parameters
params = {
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    # Choose the models
    'recognitionModel': 'recognition_03',  # must match the face list's recognitionModel
    'detectionModel': 'detection_01',
    # Optional parameter; read the API docs carefully
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
}
response = requests.post(face_api_url, params=params,
                         headers=headers, json={"url": image_url})

findsimilars_url = "https://api-gzf123.cognitiveservices.azure.com/face/v1.0/findsimilars"
# Request body; the faceId comes from detecting a photo first
data_findsimilars = {
    "faceId": "4beb073f-7fdf-4d37-9d61-d83c07c396d6",  # the faceId detected above
    "faceListId": "blackpink0",
    "maxNumOfCandidatesReturned": 10,
    "mode": "matchFace"  # or "matchPerson": one verifies identity, the other ranks by similarity
}
r_findsimilars = requests.post(findsimilars_url, headers=headers, json=data_findsimilars)
r_findsimilars
r_findsimilars.json()
```
- Response
```python
[{'persistedFaceId': '4278a690-5f78-40d7-abe5-88eab5d5494f',
'confidence': 0.29269},
{'persistedFaceId': '8b1c2538-05cc-4636-8460-cfa8059d5c72',
'confidence': 0.20908},
{'persistedFaceId': '5bfad450-0274-40b5-ba9f-0e4a0af9763b',
'confidence': 0.17849},
{'persistedFaceId': 'c0ee4410-bca0-4a3f-acb5-74c8eb72f0a1',
'confidence': 0.16209},
{'persistedFaceId': 'e61d920f-f131-464a-9a43-a82babf20358',
'confidence': 0.15023},
{'persistedFaceId': '6a663a8e-725c-4c7d-bd72-5520c4a2e93e',
'confidence': 0.101},
{'persistedFaceId': '8a559e64-a44b-4a4b-84dc-e14da959da3a',
'confidence': 0.10034},
{'persistedFaceId': '0523267b-6750-4d1e-987d-674e0ecb0821',
'confidence': 0.0999},
{'persistedFaceId': '5081698a-efe4-4be9-822c-cb58f2cc4803',
'confidence': 0.09955},
{'persistedFaceId': 'd0c25191-8224-410e-9eed-df3457e0a77f',
'confidence': 0.09503}]
```
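The findsimilars call returns candidates ranked by `confidence`, so a caller usually wants the best candidate above some cutoff. A small offline sketch on a trimmed-down copy of the response above (the helper name `best_match` and the 0.5 default cutoff are my own choices, not part of the API):

```python
def best_match(candidates, threshold=0.5):
    """Return the most confident candidate at or above the threshold, or None."""
    top = max(candidates, key=lambda c: c['confidence'], default=None)
    if top is not None and top['confidence'] >= threshold:
        return top
    return None

# Trimmed-down copy of the findsimilars response above
candidates = [
    {'persistedFaceId': '4278a690-5f78-40d7-abe5-88eab5d5494f', 'confidence': 0.29269},
    {'persistedFaceId': '8b1c2538-05cc-4636-8460-cfa8059d5c72', 'confidence': 0.20908},
]
print(best_match(candidates))                 # None: the best score 0.29 is below 0.5
print(best_match(candidates, threshold=0.2))  # the 0.29269 candidate
```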
### Face++
#### 1. Setup
```python
api_secret = "wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG"
api_key = "AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd" # Replace with a valid Subscription Key here.
```
#### 2. Create a FaceSet
```python
# 1. FaceSet Create
import requests,json
display_name = "blackpink合集2"
outer_id = "bp2"
user_data = "4人,4个女生"
CreateFace_Url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/create"
payload = {
'api_key': 'AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd',
'api_secret': 'wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG',
'display_name':display_name,
'outer_id':outer_id,
'user_data':user_data
}
r = requests.post(CreateFace_Url, params=payload)
r.json()
```
- Response
```python
{'faceset_token': 'a1b72a5c07e97db3316fc5c1bb0044f7',
'time_used': 169,
'face_count': 0,
'face_added': 0,
'request_id': '1603291637,59410e98-7433-4991-aeff-37f5e1a81fe3',
'outer_id': 'bp2',
'failure_detail': []}
```
#### 3. Get FaceSet Details
```python
GetDetail_Url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/getdetail"
payload = {
'api_key': 'AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd',
'api_secret': 'wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG',
'outer_id':outer_id,
}
r = requests.post(GetDetail_Url,params=payload)
r.json()
```
- Response
```python
{'faceset_token': 'a1b72a5c07e97db3316fc5c1bb0044f7',
'tags': '',
'time_used': 85,
'user_data': '4人,4个女生',
'display_name': 'blackpink合集2',
'face_tokens': [],
'face_count': 0,
'request_id': '1603291682,d4717677-c3ba-4b41-a493-7d0e59e81e45',
'outer_id': 'bp2'}
```
#### 4. Add Faces
```python
AddFace_url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/addface"
payload = {
'api_key': 'AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd',
'api_secret': 'wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG',
'faceset_token':'a1b72a5c07e97db3316fc5c1bb0044f7',
'face_tokens':'b0407b9e803ebd39d511cd7956fd5bf5',
}
r = requests.post(AddFace_url,params=payload)
r.json()
```
- Response
```python
{'faceset_token': 'a1b72a5c07e97db3316fc5c1bb0044f7',
'time_used': 96,
'face_count': 0,
'face_added': 0,
'request_id': '1603295025,18e0da47-ec6f-4ecf-a055-ce20509eac97',
'outer_id': 'bp2',
'failure_detail': [{'reason': 'INVALID_FACE_TOKEN',
'face_token': 'b0407b9e803ebd39d511cd7956fd5bf5'}]}
```
#### 5. Remove Faces
```python
RemoveFace_url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/removeface"
payload = {
'api_key': 'AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd',
'api_secret': 'wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG',
'faceset_token':'a1b72a5c07e97db3316fc5c1bb0044f7',
'face_tokens':'b0407b9e803ebd39d511cd7956fd5bf5',
}
r = requests.post(RemoveFace_url,params=payload)
r.json()
```
- Response
```python
{'faceset_token': '37071d95016c1b2d81591a6f0c1681f2',
'face_removed': 1,
'time_used': 175,
'face_count': 0,
'request_id': '1602155720,7a6b9a32-08ca-4c14-981d-d4ffbd82da48',
'outer_id': '00001',
'failure_detail': []}
```
#### 6. Update FaceSet Info
```python
Update_url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/update"
payload = {
'api_key': 'AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd',
'api_secret': 'wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG',
'faceset_token':'a1b72a5c07e97db3316fc5c1bb0044f7',
'user_data':"4人,4女生",
}
r = requests.post(Update_url,params=payload)
r.json()
```
- Response
```python
{'faceset_token': 'a1b72a5c07e97db3316fc5c1bb0044f7',
'request_id': '1603295125,6143f1f4-407a-4514-a79a-5b2d4ae190ff',
'time_used': 69,
'outer_id': 'bp2'}
```
#### 7. Face Comparison
**(1). Direct comparison**
```python
import requests
Compare_url = "https://api-cn.faceplusplus.com/facepp/v3/compare"
payload ={
'api_key': 'AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd',
'api_secret': 'wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG',
'image_url1':"https://wx2.sinaimg.cn/mw690/5968c483ly1gjzep8b66uj20hs0drwfk.jpg",
'image_url2':"https://tse3-mm.cn.bing.net/th/id/OIP.ZnXeGoVYT4jQudiPOGZn3QAAAA?pid=Api&rs=1",
}
r = requests.post(Compare_url,params=payload)
r.json()
```
- Response
```python
{'faces1': [{'face_rectangle': {'width': 125,
'top': 120,
'left': 211,
'height': 125},
'face_token': '0d13aca10479d1a3d6e5f416fdabec3a'}],
'faces2': [{'face_rectangle': {'width': 86,
'top': 91,
'left': 65,
'height': 86},
'face_token': '950d857df6968e2162fb6c1c48599b9d'}],
'time_used': 2439,
'thresholds': {'1e-3': 62.327, '1e-5': 73.975, '1e-4': 69.101},
'confidence': 18.503,
'image_id2': 'g6kg8zfyOouG6ftP+GvEfg==',
'image_id1': 'lZfYCnnS2JgC4JYOgK5Tqw==',
'request_id': '1603464488,1a77db2e-3add-45e7-ae14-f587215e3824'}
```
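The Face++ compare response above carries both a `confidence` score and `thresholds` keyed by false-positive rate, so the same-person decision is just a comparison at the chosen strictness level. A minimal offline sketch on a trimmed-down copy of that response (the helper name `is_same_person` is mine):

```python
def is_same_person(result, fpr='1e-4'):
    """Treat the pair as the same person if confidence clears the chosen threshold."""
    return result['confidence'] >= result['thresholds'][fpr]

# Trimmed-down copy of the compare response above
result = {'confidence': 18.503,
          'thresholds': {'1e-3': 62.327, '1e-4': 69.101, '1e-5': 73.975}}
print(is_same_person(result))  # False: 18.5 is well below every threshold
```

A stricter `fpr` (e.g. `'1e-5'`) demands a higher confidence before declaring a match.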
**(2). Comparison against the face set**
```python
import requests
Detect_url = 'https://api-cn.faceplusplus.com/facepp/v3/detect'
payload = {
"image_url":"https://gss0.baidu.com/9fo3dSag_xI4khGko9WTAnF6hhy/zhidao/pic/item/7c1ed21b0ef41bd57f7f20ff57da81cb39db3d89.jpg",
'api_key': 'AAsLeQKCJe1Up-dA_ggzZGBXZq3f3brd',
'api_secret': 'wIm6i_GWAolQ_ArUGx9fqYE87YVE9MgG',
'return_attributes':'gender,age,smiling,emotion',
}
r = requests.post(Detect_url,params=payload)
r.json()
```
- Response
```python
{'request_id': '1603464455,9d5b8b84-6e3f-4266-aa59-4b9ae23d41e2',
'time_used': 4093,
'faces': [{'face_token': 'a9263f8b37de34cb72b10df389325600',
'face_rectangle': {'top': 871, 'left': 1114, 'width': 824, 'height': 824},
'attributes': {'gender': {'value': 'Male'},
'age': {'value': 59},
'smile': {'value': 99.998, 'threshold': 50.0},
'emotion': {'anger': 0.0,
'disgust': 0.047,
'fear': 0.0,
'happiness': 99.945,
'neutral': 0.0,
'sadness': 0.007,
'surprise': 0.0}}}],
'image_id': 'KIOXEC2V/MyL4zuopAcNig==',
'face_num': 1}
```
### Baidu AI Cloud
#### 1. Face Detection
```python
import requests
host = 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id={}&client_secret={}'
apikey = "gzrrKaNlQ9GSH3dX54Gved51"
secretkey = "rxVFw6etVdS4XzDP2lhXnqEynVWKf8vD"
response = requests.get(host.format(apikey, secretkey))
if response:
    print(response.json())
# {'refresh_token': '25.28e474eb76b22b468b28bd214b4c5f5e.315360000.1918366387.282335-22838595',
# 'expires_in': 2592000,
# 'session_key': '9mzdDcZhxI0DqtPikwQHIKEoKgdnFBrsLrnaN0mbUqjzpTf/BQLtOGJhjMAbUmQN3fXWr8Bp4hkjIbggmsTDGBTb0MAaIQ==',
# 'access_token': '24.60f9411a4b4ea9f169195953c398a3e0.2592000.1605598387.282335-22838595',
# 'scope': 'public brain_all_scope vis-faceverify_faceverify_h5-face-liveness vis-faceverify_FACE_V3 vis-faceverify_idl_face_merge vis-faceverify_FACE_EFFECT wise_adapt lebo_resource_base lightservice_public hetu_basic lightcms_map_poi kaidian_kaidian ApsMisTest_Test权限 vis-classify_flower lpq_开放 cop_helloScope ApsMis_fangdi_permission smartapp_snsapi_base smartapp_mapp_dev_manage iop_autocar oauth_tp_app smartapp_smart_game_openapi oauth_sessionkey smartapp_swanid_verify smartapp_opensource_openapi smartapp_opensource_recapi fake_face_detect_开放Scope vis-ocr_虚拟人物助理 idl-video_虚拟人物助理 smartapp_component',
# 'session_secret': '3467d4c7825a546093e81c2d4249db92'}
# 1. Face detection and attribute analysis
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/detect"
params = "{\"image\":\"http://img.idol001.com/origin/2017/11/18/83731b5eae305e8e9dc4ee88660015911510974229_watermark.jpg\",\"image_type\":\"URL\",\"face_field\":\"faceshape,facetype\"}"
# image_type: type of the image source (URL here)
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595'  # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
```
- Response
```python
{'error_code': 0,
'error_msg': 'SUCCESS',
'log_id': 1019494001201,
'timestamp': 1603520482,
'cached': 0,
'result': {'face_num': 1,
'face_list': [{'face_token': 'ea408896855b398717c989bc1db786e7',
'location': {'left': 252.92,
'top': 266.62,
'width': 405,
'height': 398,
'rotation': -9},
'face_probability': 1,
'angle': {'yaw': -17.6, 'pitch': 11.4, 'roll': -8.25},
'face_shape': {'type': 'oval', 'probability': 0.52},
'face_type': {'type': 'cartoon', 'probability': 0.65}}]}}
```
#### 2. Face Matching
```python
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/match"
params = "[{\"image\": \"https://p3.ifengimg.com/2018_50/F14608222D63172539BE6227A32EF057D1AD7D38_w1024_h1312.jpg\", \"image_type\": \"URL\", \"face_type\": \"CERT\", \"quality_control\": \"LOW\"}, {\"image\": \"http://img.idol001.com/origin/2017/11/18/83731b5eae305e8e9dc4ee88660015911510974229_watermark.jpg\", \"image_type\": \"URL\", \"face_type\": \"LIVE\", \"quality_control\": \"LOW\"}]"
# face_type: type of the face image, one of LIVE, IDCARD, WATERMARK, CERT, INFRARED
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595'  # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
```
- Response
```python
{'error_code': 0,
'error_msg': 'SUCCESS',
'log_id': 2515258975556,
'timestamp': 1603472810,
'cached': 0,
'result': {'score': 91.79843903,
'face_list': [{'face_token': 'dd0745813c71dbbf9375c03950385539'},
{'face_token': 'ea408896855b398717c989bc1db786e7'}]}}
```
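The hand-escaped JSON strings in the two Baidu requests above are easy to get wrong; building the body with `json.dumps` produces the same string without the backslashes. A small sketch using the same field names as above (the example.com image URLs are placeholders, not from the original):

```python
import json

# Build the match request body as a Python list, then serialize it
faces = [
    {"image": "https://example.com/a.jpg", "image_type": "URL",
     "face_type": "CERT", "quality_control": "LOW"},
    {"image": "https://example.com/b.jpg", "image_type": "URL",
     "face_type": "LIVE", "quality_control": "LOW"},
]
params = json.dumps(faces)
print(params)  # a valid JSON array string, ready to send as the request body
```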
---
3. Computer Vision
## Detailed results are shown in the code blocks
### 1. Analyze a Remote Image
```python
import requests
# If you are using a Jupyter notebook, uncomment the following line.
# %matplotlib inline
import matplotlib.pyplot as plt
import json
from PIL import Image
from io import BytesIO
# Add your Computer Vision subscription key and endpoint to your environment variables.
# if 'COMPUTER_VISION_SUBSCRIPTION_KEY' in os.environ:
# subscription_key = os.environ['COMPUTER_VISION_SUBSCRIPTION_KEY']
# else:
# print("\nSet the COMPUTER_VISION_SUBSCRIPTION_KEY environment variable.\n**Restart your shell or IDE for changes to take effect.**")
# sys.exit()
endpoint = "https://computervision-gzf.cognitiveservices.azure.com/"
# if 'COMPUTER_VISION_ENDPOINT' in os.environ:
# endpoint = os.environ['COMPUTER_VISION_ENDPOINT']
subscription_key = "15f98ea70f74425382a7f6b768cd9915"
# base url
analyze_url = endpoint+ "vision/v2.1/analyze"
# Set image_url to the URL of an image that you want to analyze.
image_url = "http://n.sinaimg.cn/fashion/transform/20170120/uK5B-fxzusxt7718219.jpg"
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
# Parameters
params = {'visualFeatures': 'Categories,Description,Color'}
# Request body
data = {'url': image_url}
response = requests.post(analyze_url, headers=headers,
params=params, json=data)
response.raise_for_status()
# The 'analysis' object contains various fields that describe the image. The most
# relevant caption for the image is obtained from the 'description' property.
analysis = response.json()
print(json.dumps(response.json()))
image_caption = analysis["description"]["captions"][0]["text"].capitalize()
# Display the image and overlay it with the caption.
image = Image.open(BytesIO(requests.get(image_url).content))
plt.imshow(image)
plt.axis("off")
_ = plt.title(image_caption, size="x-large", y=-0.1)
plt.show()
```
- Response
```python
{"categories": [{"name": "people_portrait", "score": 0.56640625, "detail": {"celebrities": [{"name": "Song-Kyoung Lee", "confidence": 0.9989079236984253, "faceRectangle": {"left": 170, "top": 203, "width": 208, "height": 208}}]}}], "color": {"dominantColorForeground": "Brown", "dominantColorBackground": "White", "dominantColors": ["White", "Brown"], "accentColor": "A9223D", "isBwImg": false, "isBWImg": false}, "description": {"tags": ["person", "woman", "clothing", "indoor", "girl", "beautiful", "hair", "posing", "young", "smiling", "holding", "wearing", "food", "close", "dress", "little", "phone", "eating", "standing", "plate", "shirt", "stuffed", "bear"], "captions": [{"text": "a close up of Song-Kyoung Lee", "confidence": 0.9628735499041425}]}, "requestId": "30c89ac4-a38a-4569-b146-4c6259fb2068", "metadata": {"height": 560, "width": 430, "format": "Jpeg"}}
```
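The caption overlaid on the image comes from `analysis["description"]["captions"][0]`, as in the quickstart code above. A minimal offline sketch on a trimmed-down copy of the response (note that `str.capitalize` also lowercases the rest of the string, which is why the celebrity name loses its capitals):

```python
# Trimmed-down copy of the analyze response above
analysis = {"description": {"captions": [
    {"text": "a close up of Song-Kyoung Lee", "confidence": 0.9628735499041425}]}}

caption = analysis["description"]["captions"][0]
image_caption = caption["text"].capitalize()
print(image_caption)  # A close up of song-kyoung lee
print(round(caption["confidence"], 2))  # 0.96
```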

### 2. Analyze a Local Image
```python
import os
import sys
import requests
# If you are using a Jupyter notebook, uncomment the following line.
# %matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
# Add your Computer Vision subscription key and endpoint to your environment variables.
# if 'COMPUTER_VISION_SUBSCRIPTION_KEY' in os.environ:
# subscription_key = os.environ['COMPUTER_VISION_SUBSCRIPTION_KEY']
# else:
# print("\nSet the COMPUTER_VISION_SUBSCRIPTION_KEY environment variable.\n**Restart your shell or IDE for changes to take effect.**")
# sys.exit()
# if 'COMPUTER_VISION_ENDPOINT' in os.environ:
# endpoint = os.environ['COMPUTER_VISION_ENDPOINT']
endpoint = "https://computervision-gzf.cognitiveservices.azure.com/"
analyze_url = endpoint + "vision/v2.1/analyze"
# Set image_path to the local path of an image that you want to analyze.
image_path = r"D:\qbhn.jpg"  # raw string so the backslash is not treated as an escape
# Read the image into a byte array
image_data = open(image_path, "rb").read()
headers = {'Ocp-Apim-Subscription-Key': "15f98ea70f74425382a7f6b768cd9915",
'Content-Type': 'application/octet-stream'}
params = {'visualFeatures': 'Categories,Description,Color'}
response = requests.post(
analyze_url, headers=headers, params=params, data=image_data)
response.raise_for_status()
# The 'analysis' object contains various fields that describe the image. The most
# relevant caption for the image is obtained from the 'description' property.
analysis = response.json()
print(analysis)
image_caption = analysis["description"]["captions"][0]["text"].capitalize()
# Display the image and overlay it with the caption.
image = Image.open(BytesIO(image_data))
plt.imshow(image)
plt.axis("off")
_ = plt.title(image_caption, size="x-large", y=-0.1)
```
- Response
```python
{'categories': [{'name': 'people_portrait', 'score': 0.49609375}], 'color': {'dominantColorForeground': 'White', 'dominantColorBackground': 'White', 'dominantColors': ['White'], 'accentColor': '945437', 'isBwImg': False, 'isBWImg': False}, 'description': {'tags': ['person', 'woman', 'wearing', 'hair', 'young', 'posing', 'smiling', 'holding', 'dress', 'photo', 'shirt', 'close', 'girl', 'blue', 'head', 'phone', 'eyes', 'dressed', 'neck', 'standing'], 'captions': [{'text': 'a close up of a woman', 'confidence': 0.9599742870326996}]}, 'requestId': '3a014fdc-a694-4d22-a1ef-e91bda5406fc', 'metadata': {'height': 1200, 'width': 1920, 'format': 'Jpeg'}}
```

### 3. Generate a Thumbnail
```python
import os
import sys
import requests
# If you are using a Jupyter notebook, uncomment the following line.
# %matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
# Add your Computer Vision subscription key and endpoint to your environment variables.
# if 'COMPUTER_VISION_SUBSCRIPTION_KEY' in os.environ:
# subscription_key = os.environ['COMPUTER_VISION_SUBSCRIPTION_KEY']
# else:
# print("\nSet the COMPUTER_VISION_SUBSCRIPTION_KEY environment variable.\n**Restart your shell or IDE for changes to take effect.**")
# sys.exit()
# if 'COMPUTER_VISION_ENDPOINT' in os.environ:
# endpoint = os.environ['COMPUTER_VISION_ENDPOINT']
thumbnail_url = "https://computervision-gzf.cognitiveservices.azure.com/" + "vision/v2.1/generateThumbnail"
# Set image_url to the URL of an image that you want to analyze.
image_url = "https://wx1.sinaimg.cn/mw690/007bZFh6gy1gbkq4or2oqj31i614mgyl.jpg"
headers = {'Ocp-Apim-Subscription-Key': "dd748cf10bf9404399e5416d9399e218"}
params = {'width': '100', 'height': '100', 'smartCropping': 'true'}
data = {'url': image_url}
response = requests.post(thumbnail_url, headers=headers,
params=params, json=data)
response.raise_for_status()
thumbnail = Image.open(BytesIO(response.content))
# Display the thumbnail.
plt.imshow(thumbnail)
plt.axis("off")
# Verify the thumbnail size.
print("Thumbnail is {0}-by-{1}".format(*thumbnail.size))
```
- Response
```
Thumbnail is 100-by-100
```

### 4. Extract Printed and Handwritten Text
```python
# 4. Extract handwritten text
import json
import os
import sys
import requests
import time
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from PIL import Image
from io import BytesIO
text_recognition_url = endpoint + "/vision/v3.0/read/analyze"
# Set image_url to the URL of an image that you want to recognize.
image_url = "https://img.juzibashi.com/uploadfile/zspics/2019092905132979.jpg"
headers = {'Ocp-Apim-Subscription-Key': "15f98ea70f74425382a7f6b768cd9915"}
data = {'url': image_url}
response = requests.post(text_recognition_url, headers=headers, json=data)
response.raise_for_status()
# Extracting text requires two API calls: One call to submit the
# image for processing, the other to retrieve the text found in the image.
# Holds the URI used to retrieve the recognized text.
operation_url = response.headers["Operation-Location"]
# The recognized text isn't immediately available, so poll to wait for completion.
analysis = {}
poll = True
while poll:
    response_final = requests.get(
        response.headers["Operation-Location"], headers=headers)
    analysis = response_final.json()
    print(json.dumps(analysis, indent=4))
    time.sleep(1)
    if "analyzeResult" in analysis:
        poll = False
    if "status" in analysis and analysis['status'] == 'failed':
        poll = False

polygons = []
if "analyzeResult" in analysis:
    # Extract the recognized text, with bounding boxes.
    polygons = [(line["boundingBox"], line["text"])
                for line in analysis["analyzeResult"]["readResults"][0]["lines"]]

# Display the image and overlay it with the extracted text.
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image)
for polygon in polygons:
    vertices = [(polygon[0][i], polygon[0][i+1])
                for i in range(0, len(polygon[0]), 2)]
    text = polygon[1]
    patch = Polygon(vertices, closed=True, fill=False, linewidth=2, color='y')
    ax.axes.add_patch(patch)
    plt.text(vertices[0][0], vertices[0][1], text, fontsize=20, va="top")
plt.show()
plt.show()
```
- Response
```python
{
"status": "running",
"createdDateTime": "2020-10-23T14:39:17Z",
"lastUpdatedDateTime": "2020-10-23T14:39:17Z"
}
{
"status": "succeeded",
"createdDateTime": "2020-10-23T14:39:17Z",
"lastUpdatedDateTime": "2020-10-23T14:39:18Z",
"analyzeResult": {
"version": "3.0.0",
"readResults": [
{
"page": 1,
"angle": -0.0158,
"width": 578,
"height": 235,
"unit": "pixel",
"lines": [
{
"boundingBox": [
1,
7,
545,
5,
545,
32,
1,
35
],
"text": "7, Excuse me, Amy. Can I use your pencil? No problem.",
"words": [
{
"boundingBox": [
2,
8,
15,
8,
16,
34,
3,
34
],
"text": "7,",
"confidence": 0.652
},
{
"boundingBox": [
20,
8,
89,
8,
90,
34,
21,
34
],
"text": "Excuse",
"confidence": 0.942
},
{
"boundingBox": [
94,
8,
128,
8,
128,
35,
95,
34
],
"text": "me,",
"confidence": 0.978
},
{
"boundingBox": [
133,
8,
187,
8,
187,
35,
134,
35
],
"text": "Amy.",
"confidence": 0.816
},
{
"boundingBox": [
192,
8,
227,
7,
227,
35,
192,
35
],
"text": "Can",
"confidence": 0.945
},
{
"boundingBox": [
232,
7,
252,
7,
253,
35,
233,
35
],
"text": "I",
"confidence": 0.979
},
{
"boundingBox": [
257,
7,
293,
7,
293,
34,
258,
34
],
"text": "use",
"confidence": 0.981
},
{
"boundingBox": [
298,
7,
343,
7,
343,
34,
298,
34
],
"text": "your",
"confidence": 0.977
},
{
"boundingBox": [
348,
7,
427,
6,
427,
33,
348,
34
],
"text": "pencil?",
"confidence": 0.694
},
{
"boundingBox": [
442,
6,
467,
6,
467,
33,
442,
33
],
"text": "No",
"confidence": 0.981
},
{
"boundingBox": [
472,
6,
545,
5,
545,
32,
472,
33
],
"text": "problem.",
"confidence": 0.761
}
]
},
{
"boundingBox": [
13,
66,
516,
63,
516,
90,
13,
93
],
"text": "Where is my pencil ? Look, it's here , under your book.",
"words": [
{
"boundingBox": [
22,
67,
66,
67,
66,
93,
22,
93
],
"text": "Where",
"confidence": 0.981
},
{
"boundingBox": [
71,
67,
99,
67,
100,
93,
71,
93
],
"text": "is",
"confidence": 0.98
},
{
"boundingBox": [
105,
67,
132,
67,
132,
93,
105,
93
],
"text": "my",
"confidence": 0.981
},
{
"boundingBox": [
137,
66,
179,
66,
179,
93,
137,
93
],
"text": "pencil",
"confidence": 0.974
},
{
"boundingBox": [
184,
66,
201,
66,
201,
93,
184,
93
],
"text": "?",
"confidence": 0.985
},
{
"boundingBox": [
206,
66,
257,
66,
257,
92,
206,
93
],
"text": "Look,",
"confidence": 0.924
},
{
"boundingBox": [
262,
66,
297,
65,
297,
92,
262,
92
],
"text": "it's",
"confidence": 0.983
},
{
"boundingBox": [
302,
65,
331,
65,
331,
92,
302,
92
],
"text": "here",
"confidence": 0.981
},
{
"boundingBox": [
336,
65,
360,
65,
360,
92,
336,
92
],
"text": ",",
"confidence": 0.962
},
{
"boundingBox": [
365,
65,
422,
64,
422,
92,
365,
92
],
"text": "under",
"confidence": 0.981
},
{
"boundingBox": [
427,
64,
470,
64,
470,
92,
427,
92
],
"text": "your",
"confidence": 0.985
},
{
"boundingBox": [
475,
64,
517,
64,
517,
91,
475,
92
],
"text": "book.",
"confidence": 0.873
}
]
},
{
"boundingBox": [
31,
101,
126,
101,
126,
120,
31,
120
],
"text": "65 40 %?",
"words": [
{
"boundingBox": [
47,
102,
61,
102,
61,
120,
47,
120
],
"text": "65",
"confidence": 0.559
},
{
"boundingBox": [
65,
102,
91,
102,
90,
120,
65,
120
],
"text": "40",
"confidence": 0.559
},
{
"boundingBox": [
107,
102,
126,
103,
126,
120,
107,
120
],
"text": "%?",
"confidence": 0.954
}
]
},
{
"boundingBox": [
27,
121,
315,
120,
315,
148,
27,
149
],
"text": "Here you are ! Thank you !",
"words": [
{
"boundingBox": [
35,
122,
84,
122,
84,
150,
35,
149
],
"text": "Here",
"confidence": 0.981
},
{
"boundingBox": [
90,
122,
128,
122,
128,
150,
89,
150
],
"text": "you",
"confidence": 0.986
},
{
"boundingBox": [
133,
122,
155,
122,
155,
150,
133,
150
],
"text": "are",
"confidence": 0.981
},
{
"boundingBox": [
161,
122,
190,
122,
190,
150,
161,
150
],
"text": "!",
"confidence": 0.985
},
{
"boundingBox": [
197,
122,
259,
122,
260,
149,
197,
150
],
"text": "Thank",
"confidence": 0.979
},
{
"boundingBox": [
265,
122,
290,
121,
291,
149,
265,
149
],
"text": "you",
"confidence": 0.985
},
{
"boundingBox": [
296,
121,
314,
121,
314,
149,
296,
149
],
"text": "!",
"confidence": 0.981
}
]
},
{
"boundingBox": [
221,
157,
264,
158,
263,
173,
221,
174
],
"text": "iftiff !",
"words": [
{
"boundingBox": [
221,
157,
250,
157,
250,
174,
221,
174
],
"text": "iftiff",
"confidence": 0.559
},
{
"boundingBox": [
254,
157,
263,
157,
263,
174,
254,
174
],
"text": "!",
"confidence": 0.986
}
]
},
{
"boundingBox": [
2,
187,
13,
187,
12,
203,
2,
203
],
"text": "8",
"words": [
{
"boundingBox": [
2,
187,
13,
187,
13,
203,
2,
203
],
"text": "8",
"confidence": 0.985
}
]
},
{
"boundingBox": [
14,
179,
536,
173,
536,
200,
14,
206
],
"text": "Where is my bag? In the desk? Not On the desk ? Yest",
"words": [
{
"boundingBox": [
27,
179,
73,
178,
73,
206,
27,
206
],
"text": "Where",
"confidence": 0.983
},
{
"boundingBox": [
79,
178,
107,
178,
107,
205,
79,
206
],
"text": "is",
"confidence": 0.985
},
{
"boundingBox": [
112,
178,
137,
178,
137,
205,
112,
205
],
"text": "my",
"confidence": 0.954
},
{
"boundingBox": [
143,
178,
189,
177,
189,
205,
143,
205
],
"text": "bag?",
"confidence": 0.745
},
{
"boundingBox": [
194,
177,
216,
177,
216,
204,
194,
205
],
"text": "In",
"confidence": 0.986
},
{
"boundingBox": [
221,
177,
253,
176,
253,
204,
221,
204
],
"text": "the",
"confidence": 0.981
},
{
"boundingBox": [
258,
176,
315,
176,
315,
203,
258,
204
],
"text": "desk?",
"confidence": 0.901
},
{
"boundingBox": [
320,
176,
358,
175,
358,
203,
320,
203
],
"text": "Not",
"confidence": 0.977
},
{
"boundingBox": [
363,
175,
388,
175,
388,
202,
363,
203
],
"text": "On",
"confidence": 0.982
},
{
"boundingBox": [
393,
175,
429,
175,
429,
202,
393,
202
],
"text": "the",
"confidence": 0.981
},
{
"boundingBox": [
434,
174,
468,
174,
468,
201,
434,
202
],
"text": "desk",
"confidence": 0.943
},
{
"boundingBox": [
473,
174,
489,
174,
489,
201,
473,
201
],
"text": "?",
"confidence": 0.786
},
{
"boundingBox": [
495,
174,
536,
173,
536,
201,
495,
201
],
"text": "Yest",
"confidence": 0.875
}
]
},
{
"boundingBox": [
554,
214,
573,
215,
573,
231,
555,
231
],
"text": "11",
"words": [
{
"boundingBox": [
554,
214,
572,
214,
572,
231,
554,
230
],
"text": "11",
"confidence": 0.986
}
]
}
]
}
]
}
}
```
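The poll-until-done pattern used in the Read example above (submit, then fetch the Operation-Location URL until the status settles) can be factored into a small helper. In this sketch the fetch function is injected so it can be exercised without the network; the helper name `poll_until_done` is mine, not part of the API:

```python
import time

def poll_until_done(fetch, delay=1.0, max_tries=30):
    """Call fetch() until the analysis either succeeds or fails."""
    for _ in range(max_tries):
        analysis = fetch()
        if "analyzeResult" in analysis or analysis.get("status") == "failed":
            return analysis
        time.sleep(delay)
    raise TimeoutError("text recognition did not finish in time")

# Stub fetcher that "succeeds" on the second call, mimicking the service's
# running -> succeeded transition shown in the response above
responses = iter([
    {"status": "running"},
    {"status": "succeeded", "analyzeResult": {"readResults": []}},
])
result = poll_until_done(lambda: next(responses), delay=0)
print(result["status"])  # succeeded
```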
