---
pipeline_tag: object-detection
---
The web-form-Detect model is a YOLOv8 object detection model trained to detect and locate UI form fields in images. It is built on the Ultralytics library and fine-tuned on a dataset of annotated UI form images.
The model is intended to detect form fields such as name, number, email, password, button, and radio button in images. It can be incorporated into applications that require automated detection of UI form fields in images.
The model has been evaluated on a held-out test dataset and achieved the following performance metrics:
- Average Precision (AP): 0.51
- Precision: 0.80
- Recall: 0.70
- F1 Score: 0.71

Please note that the actual performance may vary based on the input data distribution and quality.
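If you have your own labelled test split, comparable numbers can be reproduced with the Ultralytics validation API once the packages from the next section are installed. This is a minimal sketch; `web-forms.yaml` is a placeholder for a YOLO-format dataset definition you would supply yourself:

```python
from ultralyticsplus import YOLO

# load the published checkpoint (same as in the quick-start below)
model = YOLO('foduucom/web-form-ui-field-detection')

# validate on a YOLO-format dataset; 'web-forms.yaml' is a placeholder
# describing your own images and class names
metrics = model.val(data='web-forms.yaml')

# mean precision / recall and mAP averaged over all classes
print('Precision:', metrics.box.mp)
print('Recall:', metrics.box.mr)
print('mAP@0.5:', metrics.box.map50)
print('mAP@0.5:0.95:', metrics.box.map)
```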
To get started with the YOLOv8s object detection model for web UI field detection, follow these steps:
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
```python
from ultralyticsplus import YOLO, render_result

# load model
model = YOLO('foduucom/web-form-ui-field-detection')

# set model parameters
model.overrides['conf'] = 0.25  # NMS confidence threshold
model.overrides['iou'] = 0.45  # NMS IoU threshold
model.overrides['agnostic_nms'] = False  # NMS class-agnostic
model.overrides['max_det'] = 1000  # maximum number of detections per image

# set image
image = '/path/to/your/document/images'

# perform inference
results = model.predict(image)

# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
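Beyond printing or rendering, the returned results expose the raw detections. The following sketch, assuming the standard Ultralytics `Boxes` API, shows how to read out each detected field's class name, confidence, and pixel coordinates:

```python
# iterate over detections in the first (and only) result
boxes = results[0].boxes
for xyxy, conf, cls in zip(boxes.xyxy, boxes.conf, boxes.cls):
    xmin, ymin, xmax, ymax = [float(v) for v in xyxy]
    label = model.names[int(cls)]  # e.g. a field class such as 'email' or 'button'
    print(f'{label}: conf={float(conf):.2f} '
          f'box=({xmin:.0f}, {ymin:.0f}, {xmax:.0f}, {ymax:.0f})')
```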
The model was trained on a diverse dataset containing images of web UI forms from different sources, resolutions, and lighting conditions. The dataset was annotated with bounding box coordinates indicating the location of the UI form fields within each image.
- Total Number of Images: 600
- Annotation Format: Bounding box coordinates (xmin, ymin, xmax, ymax)
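For reference, YOLOv8 label files store boxes as normalized (x_center, y_center, width, height) rather than corner coordinates, so corner-format annotations like those above are converted before training. The helper below is a hypothetical illustration of that conversion (function name and example numbers are not from the original training pipeline):

```python
def corners_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert (xmin, ymin, xmax, ymax) pixel corners to YOLO's
    normalized (x_center, y_center, width, height)."""
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return x_center, y_center, width, height

# example: a 200x40 px field at (100, 300) in a 1280x720 screenshot
print(corners_to_yolo(100, 300, 300, 340, 1280, 720))
# -> (0.15625, 0.4444..., 0.15625, 0.0555...)
```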
The model's performance is subject to variations in image quality, lighting conditions, and image resolution. It may struggle to detect web UI form fields under extreme occlusion, and it may not generalize well to non-standard UI form layouts or unusual variations.
The model was trained and fine-tuned using a Jupyter Notebook environment.
For inquiries and contributions, please contact us at info@foduu.com
@ModelCard{
author = {Nehul Agrawal and
Rahul Parihar},
title = {YOLOv8s web-form ui fields detection},
year = {2023}
}