# API Homework: Azure
**Repository Path**: ZhuDilun/api-homework-1
## Basic Information
- **Project Name**: API Homework: Azure
- **Description**: No description available
- **Primary Language**: Python
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-10-22
- **Last Updated**: 2021-02-04
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# API Homework 1
* [1. Face Recognition](#face)
* [2. Computer Vision](#vision)
* [3. Learning Reflections](#learn)
---
1. Face Recognition
### Azure
Reference: [Azure Face API documentation](https://docs.microsoft.com/zh-cn/rest/api/cognitiveservices/face/facelist/create)
- Face Detect
```python
# Import the required modules
import requests
import json
# Subscription key
KEY = '0e06f12a31564df180fd4183d0426ede'
# Target URL
BASE_URL = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'
# Request headers
HEADERS = {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': KEY,
}
# URL of the face photo
img_url = 'https://www.apple.com.cn/newsroom/images/live-action/keynote-september-2020/apple_apple-event-keynote_tim_09152020_big.jpg.large_2x.jpg'
data = {
'url': img_url,
}
# Select the face-recognition features to return (see the API docs)
payload = {
'returnFaceId': 'true',
'returnFaceLandmarks': 'false',
'returnFaceAttributes': 'age,gender,glasses,emotion',
}
# Send the request
r = requests.post(BASE_URL, data=json.dumps(data), params=payload, headers=HEADERS)
r.status_code
# 200
r.content
# b'[{"faceId":"6d9d597e-fac5-4f29-87a0-68060978df50","faceRectangle":{"top":121,"left":571,"width":146,"height":146},"faceAttributes":{"gender":"male","age":60.0,"glasses":"ReadingGlasses","emotion":{"anger":0.003,"contempt":0.002,"disgust":0.002,"fear":0.0,"happiness":0.005,"neutral":0.986,"sadness":0.001,"surprise":0.0}}}]'
# Parse the JSON response
results = r.json()
results
# [{'faceId': '6d9d597e-fac5-4f29-87a0-68060978df50',
# 'faceRectangle': {'top': 121, 'left': 571, 'width': 146, 'height': 146},
# 'faceAttributes': {'gender': 'male',
# 'age': 60.0,
# 'glasses': 'ReadingGlasses',
# 'emotion': {'anger': 0.003,
# 'contempt': 0.002,
# 'disgust': 0.002,
# 'fear': 0.0,
# 'happiness': 0.005,
# 'neutral': 0.986,
# 'sadness': 0.001,
# 'surprise': 0.0}}}]
# Flatten the data with pandas
import pandas as pd
df = pd.json_normalize(results)
df
```
|faceId|faceRectangle.top|faceRectangle.left|faceRectangle.width|faceRectangle.height|faceAttributes.gender|faceAttributes.age|faceAttributes.glasses|faceAttributes.emotion.anger|faceAttributes.emotion.contempt|faceAttributes.emotion.disgust|faceAttributes.emotion.fear|faceAttributes.emotion.happiness|faceAttributes.emotion.neutral|faceAttributes.emotion.sadness|faceAttributes.emotion.surprise|
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
|6d9d597e-fac5-4f29-87a0-68060978df50|121|571|146|146|male|60.0|ReadingGlasses|0.003|0.002|0.002|0.0|0.005|0.986|0.001|0.0|
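As a small follow-up (not part of the original assignment), the `faceRectangle` values in the table can be turned into the `(left, upper, right, lower)` box that Pillow's `Image.crop` expects. `to_crop_box` is a hypothetical helper name, and the sample rectangle is hardcoded from the Detect response above so the sketch runs offline:

```python
# A hypothetical helper: convert an Azure faceRectangle (top/left/width/height)
# into a PIL-style (left, upper, right, lower) crop box.
def to_crop_box(face_rectangle):
    left = face_rectangle["left"]
    top = face_rectangle["top"]
    right = left + face_rectangle["width"]
    lower = top + face_rectangle["height"]
    return (left, top, right, lower)

# Sample rectangle hardcoded from the Detect response above.
rect = {"top": 121, "left": 571, "width": 146, "height": 146}
box = to_crop_box(rect)
print(box)  # (571, 121, 717, 267)
# With Pillow the face could then be cut out of the photo via:
#   Image.open(...).crop(box)
```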
- FaceList & Find Similar
```python
import requests
# 1. Create a face list
faceListId = "list_004" # face list ID
create_facelists_url = "https://api-hjq.cognitiveservices.azure.com/face/v1.0/facelists/{}" # URL (see the API docs)
subscription_key = "f212f71663134b73bdec4a5a9aece2f0" # subscription key
# Request headers
headers = {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': subscription_key,
}
# List metadata and choice of recognition model
data = {
"name": "sample_list",
"userData": "相册",
"recognitionModel": "recognition_03"
}
# Send the request
r_create = requests.put(create_facelists_url.format(faceListId), headers=headers, json=data)
r_create.content
# b''
# Created successfully
# 2. Add a face
add_face_url = "https://api-hjq.cognitiveservices.azure.com/face/v1.0/facelists/{}/persistedfaces"
assert subscription_key
headers = {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': subscription_key,
}
# URL of the face photo
img_url = "https://www.apple.com.cn/newsroom/images/live-action/keynote-september-2020/apple_apple-event-keynote_tim_09152020_big.jpg.large_2x.jpg"
# Face metadata
params_add_face = {
"userData": "Tim Cook"
}
# Send the request
r_add_face = requests.post(add_face_url.format(faceListId), headers=headers, params=params_add_face, json={"url": img_url})
r_add_face.status_code
# 200
r_add_face.json() # returns the persistedFaceId
# {'persistedFaceId': '04152fc3-7ded-459f-8f93-35db58f2a41a'}
# Wrap it in a function to make adding photos easier
def AddFace(img_url, userData):
    add_face_url = "https://api-hjq.cognitiveservices.azure.com/face/v1.0/facelists/{}/persistedFaces"
    assert subscription_key
    headers = {
        # Request headers
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': subscription_key,
    }
    params_add_face = {
        "userData": userData
    }
    r_add_face = requests.post(add_face_url.format(faceListId), headers=headers, params=params_add_face, json={"url": img_url})
    return r_add_face.status_code # return the status code
AddFace("http://huangjieqi.gitee.io/picture_storage/Autumnhui.jpg","丘天惠")
AddFace("http://huangjieqi.gitee.io/picture_storage/L-Tony-info.jpg","林嘉茵")
AddFace("http://huangjieqi.gitee.io/picture_storage/TLINGP.jpg","汤玲萍")
AddFace("http://huangjieqi.gitee.io/picture_storage/WenYanZeng.jpg","曾雯燕")
AddFace("http://huangjieqi.gitee.io/picture_storage/XIEIC.jpg","谢依希")
AddFace("http://huangjieqi.gitee.io/picture_storage/YuecongYang.png","杨悦聪")
AddFace("http://huangjieqi.gitee.io/picture_storage/Zoezhouyu.jpg","周雨")
AddFace("http://huangjieqi.gitee.io/picture_storage/crayon-heimi.jpg","刘瑜鹏")
AddFace("http://huangjieqi.gitee.io/picture_storage/jiayichen.jpg","陈嘉仪")
AddFace("http://huangjieqi.gitee.io/picture_storage/kg2000.jpg","徐旖芊")
AddFace("http://huangjieqi.gitee.io/picture_storage/liuxinrujiayou.jpg","刘心如")
AddFace("http://huangjieqi.gitee.io/picture_storage/liuyu19.png","刘宇")
AddFace("http://huangjieqi.gitee.io/picture_storage/ltco.jpg","李婷")
AddFace("http://huangjieqi.gitee.io/picture_storage/lucaszy.jpg","黄智毅")
AddFace("http://huangjieqi.gitee.io/picture_storage/pingzi0211.jpg","黄慧文")
AddFace("http://huangjieqi.gitee.io/picture_storage/shmimy-cn.jpg","张铭睿")
AddFace("http://huangjieqi.gitee.io/picture_storage/yichenting.jpg","陈婷")
AddFace("http://huangjieqi.gitee.io/picture_storage/coco022.jpg","洪可凡")
AddFace("http://huangjieqi.gitee.io/picture_storage/lujizhi.png","卢继志")
AddFace("http://huangjieqi.gitee.io/picture_storage/zzlhyy.jpg","张梓乐")
# 3. Detect a face
face_api_url = 'https://api-hjq.cognitiveservices.azure.com/face/v1.0/detect'
image_url = 'https://www.apple.com.cn/newsroom/images/live-action/keynote-september-2020/apple_apple-event-keynote_tim_09152020_big.jpg.large_2x.jpg'
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
# Request parameters
params = {
'returnFaceId': 'true',
'returnFaceLandmarks': 'false',
# Model selection
'recognitionModel': 'recognition_03', # must match the model used when creating the face list
'detectionModel': 'detection_01',
# Optional parameters; read the API docs carefully
'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
}
response = requests.post(face_api_url, params=params, headers=headers, json={"url": image_url})
response.json()
# 4. Get face-similarity confidence scores
findsimilars_url = "https://api-hjq.cognitiveservices.azure.com/face/v1.0/findsimilars"
# Request body; the faceId comes from detecting a photo first
data_findsimilars = {
"faceId": "036c5db1-83d5-4d21-8917-d33294080b39", # faceId from the detect call above
"faceListId": "list_003",
"maxNumOfCandidatesReturned": 10,
"mode": "matchFace" # or "matchPerson": one returns similarity scores, the other verifies identity
}
r_findsimilars = requests.post(findsimilars_url, headers=headers, json=data_findsimilars)
r_findsimilars.json()
# View the list
get_facelist_url = "https://api-hjq.cognitiveservices.azure.com/face/v1.0/facelists/{}"
r_get_facelist = requests.get(get_facelist_url.format(faceListId), headers=headers)
r_get_facelist.json()
# Flatten the data with pandas
import pandas as pd
# Face list contents
adf = pd.json_normalize(r_get_facelist.json()["persistedFaces"])
adf
# Similarity results
bdf = pd.json_normalize(r_findsimilars.json()) # requires a recent pandas version
bdf
# Merge the two
pd.merge(adf, bdf, how='inner', on='persistedFaceId').sort_values(by="confidence", ascending=False)
# 5. Delete a face / a face list
faceListId = "list_004" # the face list to delete from
# Delete a face from the list
delete_face_url = "https://api-hjq.cognitiveservices.azure.com/face/v1.0/facelists/{}/persistedfaces/{}"
assert subscription_key
# persistedFaceId from the add-face call above
persistedFaceId = r_add_face.json()["persistedFaceId"]
headers = {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': subscription_key,
}
# Note: this request uses the DELETE method
r_delete_face = requests.delete(delete_face_url.format(faceListId, persistedFaceId), headers=headers)
# Delete the face list
delete_facelist_url = "https://api-hjq.cognitiveservices.azure.com/face/v1.0/facelists/{}"
assert subscription_key
headers = {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': subscription_key,
}
r_delete_facelist = requests.delete(delete_facelist_url.format(faceListId), headers=headers)
```
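The nineteen back-to-back `AddFace` calls in the block above could also be driven from a single mapping of photo URL to name. Below is a sketch of that idea; the `add_face` argument is injected so the loop can be tried offline with a stub, and in real use the `AddFace` function defined above would be passed in:

```python
# Sketch: batch-add faces from a mapping instead of repeating AddFace by hand.
# `add_face` is passed in so this can be exercised with a stub (no network).
FACES = {
    "http://huangjieqi.gitee.io/picture_storage/Autumnhui.jpg": "丘天惠",
    "http://huangjieqi.gitee.io/picture_storage/L-Tony-info.jpg": "林嘉茵",
}

def add_all(faces, add_face):
    """Call add_face(url, userData) for every entry and collect the status codes."""
    return {url: add_face(url, name) for url, name in faces.items()}

# Offline example with a stub that pretends every upload returned 200:
codes = add_all(FACES, lambda url, name: 200)
print(codes)
# In the notebook above this would be: add_all(FACES, AddFace)
```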
### Face++
Reference: [Face++ face recognition documentation](https://console.faceplusplus.com.cn/documents/4888391)
- Face Detect
```python
# 1. Import the required modules
import requests
# 2. Enter the api_secret and api_key
api_secret = "Vr0PtRCw-ZFwYXTKVfi6aDNaSqlfunK3"
api_key = 'jRpCaZ34kqNYJ8Zppdc-yGGum_YkETov'
# 3. Target URL
# A local image can also be used, e.g. filepath = "image/tupian.jpg"
BASE_URL = 'https://api-cn.faceplusplus.com/facepp/v3/detect'
img_url = 'https://www.apple.com.cn/newsroom/images/live-action/keynote-september-2020/apple_apple-event-keynote_tim_09152020_big.jpg.large_2x.jpg'
# 4. Following the sample code in the API docs, prepare the headers and image data
headers = {
'Content-Type': 'application/json',
}
# 5. Prepare the request payload
payload = {
"image_url": img_url,
'api_key': api_key,
'api_secret': api_secret,
# Whether to detect and return attributes such as age, gender, smile, and emotion inferred from facial features
'return_attributes': 'gender,age,smiling,emotion',
}
# 6. Send the request with requests
r = requests.post(BASE_URL, params=payload, headers=headers)
r.status_code
# 200
r.content
# b'{"request_id":"1603021269,729c11ef-2417-4da5-b9a6-14fef225a010","time_used":1194,"faces":[{"face_token":"1aff0090208b37e2f8de321a8e2e96c7","face_rectangle":{"top":139,"left":583,"width":140,"height":140},"attributes":{"gender":{"value":"Male"},"age":{"value":66},"smile":{"value":28.454,"threshold":50.000},"emotion":{"anger":0.040,"disgust":37.393,"fear":0.040,"happiness":24.038,"neutral":37.699,"sadness":0.735,"surprise":0.054}}}],"image_id":"AfGvmVi3qesqsPIfULDZkw==","face_num":1}\n'
# requests' handy .json() method (r is the response object)
results = r.json()
results
# {'request_id': '1603021269,729c11ef-2417-4da5-b9a6-14fef225a010',
# 'time_used': 1194,
# 'faces': [{'face_token': '1aff0090208b37e2f8de321a8e2e96c7',
# 'face_rectangle': {'top': 139, 'left': 583, 'width': 140, 'height': 140},
# 'attributes': {'gender': {'value': 'Male'},
# 'age': {'value': 66},
# 'smile': {'value': 28.454, 'threshold': 50.0},
# 'emotion': {'anger': 0.04,
# 'disgust': 37.393,
# 'fear': 0.04,
# 'happiness': 24.038,
# 'neutral': 37.699,
# 'sadness': 0.735,
# 'surprise': 0.054}}}],
# 'image_id': 'AfGvmVi3qesqsPIfULDZkw==',
# 'face_num': 1}
```
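The nested Face++ response can be flattened with pandas just like the Azure one earlier. A sketch, with a trimmed copy of the response above hardcoded so it runs without calling the API:

```python
import pandas as pd

# Trimmed copy of the Face++ Detect response shown above.
results = {
    "faces": [{
        "face_token": "1aff0090208b37e2f8de321a8e2e96c7",
        "face_rectangle": {"top": 139, "left": 583, "width": 140, "height": 140},
        "attributes": {"gender": {"value": "Male"}, "age": {"value": 66}},
    }],
    "face_num": 1,
}

# json_normalize flattens the nested dicts into dotted column names,
# e.g. 'face_rectangle.top' and 'attributes.gender.value'.
df = pd.json_normalize(results["faces"])
print(df.columns.tolist())
```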
- FaceSet & Compare Face
```python
api_secret = "Vr0PtRCw-ZFwYXTKVfi6aDNaSqlfunK3"
api_key = 'jRpCaZ34kqNYJ8Zppdc-yGGum_YkETov'
# 1. FaceSet Create
import requests, json
display_name = "人脸集合1018" # custom name for the face set
outer_id = "20201018" # custom identifier
user_data = "test" # custom user data
CreateFace_Url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/create" # endpoint URL
payload = {
# Request parameters
'api_key': api_key,
'api_secret': api_secret,
'display_name': display_name,
'outer_id': outer_id,
'user_data': user_data
}
r = requests.post(CreateFace_Url, params=payload)
r.json()
# {'faceset_token': '52ed3456d1211f0ec0387f413ef998aa',
# 'time_used': 184,
# 'face_count': 0,
# 'face_added': 0,
# 'request_id': '1603021460,101dc813-10a0-47bd-b46c-48a07231048b',
# 'outer_id': '20201018',
# 'failure_detail': []}
# 2. FaceSet GetDetail (get face set details)
GetDetail_Url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/getdetail"
payload = {
'api_key': api_key,
'api_secret': api_secret,
'outer_id':outer_id,
}
r = requests.post(GetDetail_Url,params=payload)
r.json()
# {'faceset_token': 'a1a46eebb6f085052b807ac29364e851',
# 'tags': '',
# 'time_used': 88,
# 'user_data': 'test',
# 'display_name': '人脸集合1018',
# 'face_tokens': [],
# 'face_count': 0,
# 'request_id': '1603021481,3c684d07-df58-4562-a0a2-9db192955f50',
# 'outer_id': '20201018'}
# 3. FaceSet AddFace (add a face)
AddFace_url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/addface"
payload = {
'api_key': api_key,
'api_secret': api_secret,
'faceset_token':'52ed3456d1211f0ec0387f413ef998aa',
'face_tokens':'1aff0090208b37e2f8de321a8e2e96c7', # face_token from the Detect call above
}
r = requests.post(AddFace_url,params=payload)
r.json()
# {'faceset_token': '52ed3456d1211f0ec0387f413ef998aa',
# 'time_used': 593,
# 'face_count': 1,
# 'face_added': 1,
# 'request_id': '1603021568,e204cfa6-4891-482e-9740-396c6e96e05f',
# 'outer_id': '20201018',
# 'failure_detail': []}
# 4. FaceSet RemoveFace (remove a face)
RemoveFace_url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/removeface"
payload = {
'api_key': api_key,
'api_secret': api_secret,
'faceset_token':'52ed3456d1211f0ec0387f413ef998aa',
'face_tokens':'1aff0090208b37e2f8de321a8e2e96c7',
}
r = requests.post(RemoveFace_url,params=payload)
r.json()
# {'faceset_token': '52ed3456d1211f0ec0387f413ef998aa',
# 'face_removed': 1,
# 'time_used': 187,
# 'face_count': 0,
# 'request_id': '1603021618,ac78d6eb-222b-480e-b7b0-7e59d50a547e',
# 'outer_id': '20201018',
# 'failure_detail': []}
# 5. FaceSet Update (update face set information)
Update_url = "https://api-cn.faceplusplus.com/facepp/v3/faceset/update"
payload = {
'api_key': api_key,
'api_secret': api_secret,
'faceset_token':'52ed3456d1211f0ec0387f413ef998aa',
'user_data':"Test.",
}
r = requests.post(Update_url,params=payload)
r.json()
# {'faceset_token': '52ed3456d1211f0ec0387f413ef998aa',
# 'request_id': '1603021669,188edf69-945f-4026-a5dd-8b2d87392337',
# 'time_used': 77,
# 'outer_id': '20201018'}
# 6. Compare Face (compare the similarity of two faces)
liudehua01 = "https://gss0.baidu.com/9fo3dSag_xI4khGko9WTAnF6hhy/zhidao/pic/item/7c1ed21b0ef41bd57f7f20ff57da81cb39db3d89.jpg"
liudehua02 = "https://tse3-mm.cn.bing.net/th/id/OIP.Xz3HbYZeNrdUnGJ7vXNzsQHaKO?pid=Api&rs=1"
wangzulan = "https://tse3-mm.cn.bing.net/th/id/OIP.ZnXeGoVYT4jQudiPOGZn3QAAAA?pid=Api&rs=1"
Compare_url = "https://api-cn.faceplusplus.com/facepp/v3/compare"
payload ={
'api_key': api_key,
'api_secret': api_secret,
'image_url1':liudehua01,
'image_url2':wangzulan
}
r = requests.post(Compare_url,params=payload)
r.json()
# {'faces1': [{'face_rectangle': {'width': 824,
# 'top': 871,
# 'left': 1114,
# 'height': 824},
# 'face_token': '26cf8ddf1370a7131772b47f389d7e48'}],
# 'faces2': [{'face_rectangle': {'width': 86,
# 'top': 91,
# 'left': 65,
# 'height': 86},
# 'face_token': 'd74261adfc3a844832b36940470b1b8f'}],
# 'time_used': 2147,
# 'thresholds': {'1e-3': 62.327, '1e-5': 73.975, '1e-4': 69.101},
# 'confidence': 26.085,
# 'image_id2': 'g6kg8zfyOouG6ftP+GvEfg==',
# 'image_id1': 'KIOXEC2V/MyL4zuopAcNig==',
# 'request_id': '1603021704,6b7e272c-b979-4597-9db6-8d98e97a2371'}
```
### Baidu AI Cloud
Reference: [Baidu AI Cloud face recognition documentation](https://ai.baidu.com/ai-doc/FACE/yk37c1u4t)
```python
import requests
# client_id is the API Key and client_secret is the Secret Key obtained from the console
host = 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id={}&client_secret={}'
client_id = "DlnpohsOvQe6j7q2yaXi4k5n"
client_secret = "wQm2WIij5jy7uAl6rxbHeScmIt8kS5ma"
response = requests.get(host.format(client_id, client_secret))
if response:
    print(response.json())
# {'refresh_token': '25.28e474eb76b22b468b28bd214b4c5f5e.315360000.1918366387.282335-22838595',
# 'expires_in': 2592000,
# 'session_key': '9mzdDcZhxI0DqtPikwQHIKEoKgdnFBrsLrnaN0mbUqjzpTf/BQLtOGJhjMAbUmQN3fXWr8Bp4hkjIbggmsTDGBTb0MAaIQ==',
# 'access_token': '24.60f9411a4b4ea9f169195953c398a3e0.2592000.1605598387.282335-22838595',
# 'scope': 'public brain_all_scope vis-faceverify_faceverify_h5-face-liveness vis-faceverify_FACE_V3 vis-faceverify_idl_face_merge vis-faceverify_FACE_EFFECT wise_adapt lebo_resource_base lightservice_public hetu_basic lightcms_map_poi kaidian_kaidian ApsMisTest_Test权限 vis-classify_flower lpq_开放 cop_helloScope ApsMis_fangdi_permission smartapp_snsapi_base smartapp_mapp_dev_manage iop_autocar oauth_tp_app smartapp_smart_game_openapi oauth_sessionkey smartapp_swanid_verify smartapp_opensource_openapi smartapp_opensource_recapi fake_face_detect_开放Scope vis-ocr_虚拟人物助理 idl-video_虚拟人物助理 smartapp_component',
# 'session_secret': '3467d4c7825a546093e81c2d4249db92'}
# 1. Face detection and attribute analysis
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/detect"
params = "{\"image\":\"https://ss2.bdstatic.com/70cFvnSh_Q1YnxGkpoWK1HF6hhy/it/u=3430692674,459091344&fm=26&gp=0.jpg\",\"image_type\":\"URL\",\"face_field\":\"faceshape,facetype\"}"
# image_type can be BASE64, URL, or FACE_TOKEN
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595' # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
# {'error_code': 0,
# 'error_msg': 'SUCCESS',
# 'log_id': 4565101254575,
# 'timestamp': 1603006446,
# 'cached': 0,
# 'result': {'face_num': 1,
# 'face_list': [{'face_token': '91cf0a5aa45b0371989e56760b30548c',
# 'location': {'left': 186.93,
# 'top': 99.21,
# 'width': 121,
# 'height': 118,
# 'rotation': 1},
# 'face_probability': 1,
# 'angle': {'yaw': -1.85, 'pitch': 4.67, 'roll': 0.97},
# 'face_shape': {'type': 'oval', 'probability': 0.66},
# 'face_type': {'type': 'human', 'probability': 1}}]}}
# 2. Face comparison
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/match"
params = "[{\"image\": \"https://ss2.bdstatic.com/70cFvnSh_Q1YnxGkpoWK1HF6hhy/it/u=3430692674,459091344&fm=26&gp=0.jpg\", \"image_type\": \"URL\", \"face_type\": \"CERT\", \"quality_control\": \"LOW\"}, {\"image\": \"https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1603015508446&di=eee4e2c852d804bc3b80a719df3df9ef&imgtype=0&src=http%3A%2F%2Fimg2-cloud.itouchtv.cn%2Ftvtouchtv%2Fimage%2F20170914%2F1505363583630378.jpg\", \"image_type\": \"URL\", \"face_type\": \"LIVE\", \"quality_control\": \"LOW\"}]"
# face_type can be LIVE, IDCARD, WATERMARK, CERT, or INFRARED
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595' # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
# {'error_code': 0,
# 'error_msg': 'SUCCESS',
# 'log_id': 7999101259975,
# 'timestamp': 1603006489,
# 'cached': 0,
# 'result': {'score': 94.37832642,
# 'face_list': [{'face_token': '91cf0a5aa45b0371989e56760b30548c'},
# {'face_token': '93d2501e1cb1685c26107d0b19cc854a'}]}}
# 3. Create a user group
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/faceset/group/add"
params = "{\"group_id\":\"group2\"}"
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595' # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
# {'error_code': 0,
# 'error_msg': 'SUCCESS',
# 'log_id': 2594655589842,
# 'timestamp': 1603006614,
# 'cached': 0,
# 'result': None}
# 4. Register a face
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/faceset/user/add"
params = "{\"image\":\"91cf0a5aa45b0371989e56760b30548c\",\"image_type\":\"FACE_TOKEN\",\"group_id\":\"group1\",\"user_id\":\"user1\",\"user_info\":\"abc\",\"quality_control\":\"LOW\",\"liveness_control\":\"NORMAL\"}"
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595' # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
# {'error_code': 0,
# 'error_msg': 'SUCCESS',
# 'log_id': 9420194001790,
# 'timestamp': 1603006639,
# 'cached': 0,
# 'result': {'face_token': '91cf0a5aa45b0371989e56760b30548c',
# 'location': {'left': 186.93,
# 'top': 99.21,
# 'width': 121,
# 'height': 118,
# 'rotation': 1}}}
# 5. Get a user's face list
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/faceset/face/getlist"
params = "{\"user_id\":\"user1\",\"group_id\":\"group2\"}"
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595' # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
# {'error_code': 0,
# 'error_msg': 'SUCCESS',
# 'log_id': 500105794579,
# 'timestamp': 1603006672,
# 'cached': 0,
# 'result': {'face_list': [{'face_token': '91cf0a5aa45b0371989e56760b30548c',
# 'ctime': '2020-10-18 15:37:20'}]}}
# 6. Delete a user group
request_url = "https://aip.baidubce.com/rest/2.0/face/v3/faceset/group/delete"
params = "{\"group_id\":\"group1\"}"
access_token = '24.6d684a22feb8384590448aa0f4edcb8e.2592000.1605596154.282335-22838595' # token obtained from the auth endpoint
request_url = request_url + "?access_token=" + access_token
headers = {'content-type': 'application/json'}
response = requests.post(request_url, data=params, headers=headers)
response.json()
# {'error_code': 0,
# 'error_msg': 'SUCCESS',
# 'log_id': 3589996510179,
# 'timestamp': 1603006728,
# 'cached': 0,
# 'result': None}
```
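The hand-escaped parameter strings in the Baidu calls above are easy to get wrong; building the same JSON body from a plain dict with `json.dumps` is less error-prone. A sketch using the face-registration parameters from step 4:

```python
import json

# Build the request body for face registration (step 4 above) from a dict
# instead of a hand-escaped string; json.dumps handles all the quoting.
params = json.dumps({
    "image": "91cf0a5aa45b0371989e56760b30548c",
    "image_type": "FACE_TOKEN",
    "group_id": "group1",
    "user_id": "user1",
    "user_info": "abc",
    "quality_control": "LOW",
    "liveness_control": "NORMAL",
})
print(params)
# `params` is then passed to requests.post(request_url, data=params, headers=headers)
# exactly as before.
```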
---
2. Computer Vision
Reference: [Azure Computer Vision documentation](https://docs.microsoft.com/zh-cn/azure/cognitive-services/computer-vision/)
### 1. Analyze a remote image
```python
import requests
%matplotlib inline
import matplotlib.pyplot as plt
import json
from PIL import Image
from io import BytesIO
endpoint = "https://dilun002.cognitiveservices.azure.com/"
subscription_key = "ec2c3e5af3b84652a898e9d94c88a113"
# base url
analyze_url = endpoint+ "vision/v3.1/analyze"
# Set image_url to the URL of an image that you want to analyze.
image_url = "https://images.unsplash.com/photo-1575556777856-a2b2d8cd7737?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=1001&q=80"
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
# Parameters
params = {'visualFeatures': 'Categories,Description,Color'}
# Request body
data = {'url': image_url}
response = requests.post(analyze_url, headers=headers,params=params, json=data)
response.raise_for_status()
# The 'analysis' object contains various fields that describe the image. The most
# relevant caption for the image is obtained from the 'description' property.
analysis = response.json()
print(json.dumps(response.json()))
image_caption = analysis["description"]["captions"][0]["text"].capitalize()
# Display the image and overlay it with the caption.
image = Image.open(BytesIO(requests.get(image_url).content))
plt.imshow(image)
plt.axis("off")
_ = plt.title(image_caption, size="x-large", y=-0.1)
plt.show()
```
{"categories": [{"name": "building_", "score": 0.2578125, "detail": {"landmarks": []}}, {"name": "building_church", "score": 0.65234375, "detail": {"landmarks": []}}], "color": {"dominantColorForeground": "White", "dominantColorBackground": "Black", "dominantColors": ["Black", "White"], "accentColor": "8A5D41", "isBwImg": false, "isBWImg": false}, "description": {"tags": ["outdoor", "road", "building", "dome", "old"], "captions": [{"text": "a person walking in front of a large building", "confidence": 0.45579203963279724}]}, "requestId": "739206bc-5e85-4e85-97c4-8072fa4195d1", "metadata": {"height": 1251, "width": 1001, "format": "Jpeg"}}

### 2. Analyze a local image
```python
import os
import sys
import requests
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
# Set image_path to the local path of an image that you want to analyze.
image_path = "/Users/apple/Downloads/barthelemy-de-mazenod-hTkVTHsFJ-U-unsplash.jpg"
# Read the image into a byte array
image_data = open(image_path, "rb").read()
headers = {'Ocp-Apim-Subscription-Key': "ec2c3e5af3b84652a898e9d94c88a113",
'Content-Type': 'application/octet-stream'}
params = {'visualFeatures': 'Categories,Description,Color'}
# analyze_url is defined in the previous cell
response = requests.post(
analyze_url, headers=headers, params=params, data=image_data)
response.raise_for_status()
# The 'analysis' object contains various fields that describe the image. The most
# relevant caption for the image is obtained from the 'description' property.
analysis = response.json()
print(analysis)
image_caption = analysis["description"]["captions"][0]["text"].capitalize()
# Display the image and overlay it with the caption.
image = Image.open(BytesIO(image_data))
plt.imshow(image)
plt.axis("off")
_ = plt.title(image_caption, size="x-large", y=-0.1)
```
{'categories': [{'name': 'abstract_', 'score': 0.00390625}, {'name': 'outdoor_', 'score': 0.03125, 'detail': {'landmarks': []}}], 'color': {'dominantColorForeground': 'White', 'dominantColorBackground': 'White', 'dominantColors': ['White', 'Black'], 'accentColor': 'C03205', 'isBwImg': False, 'isBWImg': False}, 'description': {'tags': ['building', 'outdoor', 'transport', 'car', 'curb'], 'captions': [{'text': 'a man and woman standing next to a blue car', 'confidence': 0.3949364125728607}]}, 'requestId': 'ba4361f6-512d-4099-b020-8770ada320c1', 'metadata': {'height': 3224, 'width': 2137, 'format': 'Jpeg'}}

### 3. Generate a thumbnail
```python
import os
import sys
import requests
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
thumbnail_url = "https://dilun002.cognitiveservices.azure.com/" + "vision/v3.1/generateThumbnail"
# Set image_url to the URL of an image that you want to analyze.
image_url = "https://images.unsplash.com/photo-1602694898357-3fe49323b809?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=975&q=80"
headers = {'Ocp-Apim-Subscription-Key': "ec2c3e5af3b84652a898e9d94c88a113"}
params = {'width': '100', 'height': '100', 'smartCropping': 'true'}
data = {'url': image_url}
response = requests.post(thumbnail_url, headers=headers,params=params, json=data)
response.raise_for_status()
thumbnail = Image.open(BytesIO(response.content))
# Display the thumbnail.
plt.imshow(thumbnail)
plt.axis("off")
# Verify the thumbnail size.
print("Thumbnail is {0}-by-{1}".format(*thumbnail.size))
```
Thumbnail is 100-by-100

### 4. Extract text (Read API)
```python
import json
import os
import sys
import requests
import time
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from PIL import Image
from io import BytesIO
# endpoint is defined in the "Analyze a remote image" cell; it already ends with "/"
text_recognition_url = endpoint + "vision/v3.0/read/analyze"
# Set image_url to the URL of an image that you want to recognize.
image_url = "https://www.apple.com.cn/v/macos/big-sur-preview/b/images/overview/messages/group_photos_static__fbox06wyksuq_large_2x.jpg"
headers = {'Ocp-Apim-Subscription-Key': "ec2c3e5af3b84652a898e9d94c88a113"}
data = {'url': image_url}
response = requests.post(text_recognition_url, headers=headers, json=data)
response.raise_for_status()
# Extracting text requires two API calls: One call to submit the
# image for processing, the other to retrieve the text found in the image.
# Holds the URI used to retrieve the recognized text.
operation_url = response.headers["Operation-Location"]
# The recognized text isn't immediately available, so poll to wait for completion.
analysis = {}
poll = True
while poll:
    response_final = requests.get(operation_url, headers=headers)
    analysis = response_final.json()
    print(json.dumps(analysis, indent=4))
    time.sleep(1)
    if "analyzeResult" in analysis:
        poll = False
    if "status" in analysis and analysis['status'] == 'failed':
        poll = False
polygons = []
if "analyzeResult" in analysis:
    # Extract the recognized text, with bounding boxes.
    polygons = [(line["boundingBox"], line["text"])
                for line in analysis["analyzeResult"]["readResults"][0]["lines"]]
# Display the image and overlay it with the extracted text.
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image)
for polygon in polygons:
    vertices = [(polygon[0][i], polygon[0][i + 1])
                for i in range(0, len(polygon[0]), 2)]
    text = polygon[1]
    patch = Polygon(vertices, closed=True, fill=False, linewidth=2, color='y')
    ax.axes.add_patch(patch)
    plt.text(vertices[0][0], vertices[0][1], text, fontsize=20, va="top")
plt.show()
```
{
"status": "succeeded",
"createdDateTime": "2020-10-18T10:47:13Z",
"lastUpdatedDateTime": "2020-10-18T10:47:13Z",
"analyzeResult": {
"version": "3.0.0",
"readResults": [
{
"page": 1,
"angle": 0.7275,
"width": 776,
"height": 776,
"unit": "pixel",
"lines": [
{
"boundingBox": [
449,
100,
607,
102,
606,
141,
449,
139
],
"text": "Me too",
"words": [
{
"boundingBox": [
450,
101,
522,
102,
522,
141,
450,
140
],
"text": "Me",
"confidence": 0.988
},
{
"boundingBox": [
530,
102,
608,
103,
606,
142,
529,
141
],
"text": "too",
"confidence": 0.987
}
]
},
{
"boundingBox": [
275,
664,
496,
664,
496,
710,
275,
710
],
"text": "Film Club",
"words": [
{
"boundingBox": [
276,
665,
363,
666,
363,
711,
275,
709
],
"text": "Film",
"confidence": 0.986
},
{
"boundingBox": [
381,
666,
494,
665,
495,
711,
380,
711
],
"text": "Club",
"confidence": 0.987
}
]
}
]
}
]
}
}

### 5. Extract text (OCR API)
```python
import os
import sys
import requests
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from PIL import Image
from io import BytesIO
ocr_url = endpoint + "vision/v3.1/ocr"
# Set image_url to the URL of an image that you want to analyze.
image_url = "https://www.apple.com.cn/v/macos/big-sur-preview/b/images/overview/messages/group_photos_static__fbox06wyksuq_large_2x.jpg"
headers = {'Ocp-Apim-Subscription-Key': "ec2c3e5af3b84652a898e9d94c88a113"}
params = {'language': 'unk', 'detectOrientation': 'true'}
data = {'url': image_url}
response = requests.post(ocr_url, headers=headers, params=params, json=data)
response.raise_for_status()
analysis = response.json()
# Extract the word bounding boxes and text.
line_infos = [region["lines"] for region in analysis["regions"]]
word_infos = []
for line in line_infos:
    for word_metadata in line:
        for word_info in word_metadata["words"]:
            word_infos.append(word_info)
word_infos
# Display the image and overlay it with the extracted text.
plt.figure(figsize=(5, 5))
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image, alpha=0.5)
for word in word_infos:
    bbox = [int(num) for num in word["boundingBox"].split(",")]
    text = word["text"]
    origin = (bbox[0], bbox[1])
    patch = Rectangle(origin, bbox[2], bbox[3],
                      fill=False, linewidth=2, color='y')
    ax.axes.add_patch(patch)
    plt.text(origin[0], origin[1], text, fontsize=20, weight="bold", va="top")
plt.axis("off")
plt.show()
```

### 6. Use domain models
#### 6.1 Landmarks
```python
import os
import sys
import requests
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
landmark_analyze_url = endpoint + "vision/v3.1/models/landmarks/analyze"
# Set image_url to the URL of an image that you want to analyze.
image_url = "https://images.unsplash.com/photo-1587825338028-f1d568e0dbb3?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2689&q=80"
headers = {'Ocp-Apim-Subscription-Key': "ec2c3e5af3b84652a898e9d94c88a113"}
params = {'model': 'landmarks'}
data = {'url': image_url}
response = requests.post(
landmark_analyze_url, headers=headers, params=params, json=data)
response.raise_for_status()
# The 'analysis' object contains various fields that describe the image. The
# most relevant landmark for the image is obtained from the 'result' property.
analysis = response.json()
assert analysis["result"]["landmarks"] != []
print(analysis)
landmark_name = analysis["result"]["landmarks"][0]["name"].capitalize()
# Display the image and overlay it with the landmark name.
image = Image.open(BytesIO(requests.get(image_url).content))
plt.imshow(image)
plt.axis("off")
_ = plt.title(landmark_name, size="x-large", y=-0.1)
plt.show()
```
{'result': {'landmarks': [{'name': 'Forbidden City', 'confidence': 0.9999575614929199}]}, 'requestId': '3f9c5c3b-d026-4e51-8076-d8177f3ebc99', 'metadata': {'height': 1514, 'width': 2689, 'format': 'Jpeg'}}

#### 6.2 Celebrities
```python
import requests
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
# Replace with your valid subscription key.
subscription_key = "ec2c3e5af3b84652a898e9d94c88a113"
assert subscription_key
vision_base_url = "https://dilun002.cognitiveservices.azure.com/vision/v2.1/"
celebrity_analyze_url = vision_base_url + "models/celebrities/analyze"
# Set image_url to the URL of an image that you want to analyze.
image_url = "https://www.apple.com.cn/newsroom/images/live-action/keynote-september-2020/apple_apple-event-keynote_tim_09152020_big.jpg.large_2x.jpg"
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'model': 'celebrities'}
data = {'url': image_url}
response = requests.post(
celebrity_analyze_url, headers=headers, params=params, json=data)
response.raise_for_status()
# The 'analysis' object contains various fields that describe the image. The
# most relevant celebrity for the image is obtained from the 'result' property.
analysis = response.json()
assert analysis["result"]["celebrities"] != []
print(analysis)
celebrity_name = analysis["result"]["celebrities"][0]["name"].capitalize()
# Display the image and overlay it with the celebrity name.
image = Image.open(BytesIO(requests.get(image_url).content))
plt.imshow(image)
plt.axis("off")
_ = plt.title(celebrity_name, size="x-large", y=-0.1)
plt.show()
```
{'result': {'celebrities': [{'name': 'Tim Cook', 'confidence': 0.997101366519928, 'faceRectangle': {'left': 571, 'top': 120, 'width': 142, 'height': 142}}]}, 'requestId': '9993d3cb-bf79-4e1a-98a6-fd211edae4f4', 'metadata': {'height': 1102, 'width': 1960, 'format': 'Jpeg'}}

---
3. Learning Reflections
Personally, I found this API course the most difficult of the past three semesters. It involves a large amount of code and requires the ability to read and understand the corresponding API documentation, none of which we had encountered before, so getting started was hard and the early weeks raised many questions. As the course went on, we learned more Python syntax, typed in and ran the code step by step along with Mr. Xu and the documentation, gradually learned to read API docs and understand what each line of code means, and learned to fill in the right code from the docs according to our needs. In short, the way to master this course is to keep practicing Python, keep reading documentation, and keep running code.
#### How APIs work and how to read API documentation
An API (Application Programming Interface) is a set of predefined functions, or a convention for how the different parts of a software system connect. It gives developers a set of routines for building on a piece of software or hardware without having to read its source code or understand the details of its internal workings.
While studying face recognition APIs we tried three platforms. Each provides API documentation describing its features and how to call them, and the documents are all broadly similar. Taking Azure as an example, the documentation covers: a feature overview, the HTTP method, the request URL, request parameters, request headers, the request body, and the response. The whole document, and the code, revolves around the request. From there, we basically just fill in the data we obtained (or defined ourselves), such as the endpoint, the key, and the face photo URLs, into the positions shown in the sample code. Through the request parameters we can further opt into the features the API documentation offers, as needed.
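The anatomy described above (request URL, headers, parameters, body, response) is the same for every call in this homework, so it can be captured in one generic sketch. The endpoint and key below are placeholders, and `build_request` is a hypothetical helper that only assembles the pieces without sending anything:

```python
import json

def build_request(endpoint, path, key, params=None, body=None):
    """Assemble the parts of a Cognitive-Services-style request (without sending it)."""
    return {
        "method": "POST",
        "url": endpoint.rstrip("/") + "/" + path,
        "headers": {
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": key,  # the subscription key goes in a header
        },
        "params": params or {},                # query-string parameters
        "body": json.dumps(body or {}),        # JSON request body
    }

req = build_request(
    "https://example.cognitiveservices.azure.com/",  # placeholder endpoint
    "face/v1.0/detect",
    "<your-key>",                                    # placeholder key
    params={"returnFaceId": "true"},
    body={"url": "https://example.com/photo.jpg"},
)
print(req["url"])  # https://example.cognitiveservices.azure.com/face/v1.0/detect
# Sending it would then be one line:
#   requests.post(req["url"], headers=req["headers"], params=req["params"], data=req["body"])
```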