diff --git a/cv/detection/yolof/pytorch/README.md b/cv/detection/yolof/pytorch/README.md
index 4c1ac485f90ec8b3f9d0be043724fb17747c81f7..3ca449f3861854a07e191cba0485f66e6586ca6d 100755
--- a/cv/detection/yolof/pytorch/README.md
+++ b/cv/detection/yolof/pytorch/README.md
@@ -6,12 +6,11 @@ This paper revisits feature pyramids networks (FPN) for one-stage detectors and
## Step 1: Installing packages
-```
+```bash
pip3 install -r requirements.txt
MMCV_WITH_OPS=1 python3 setup.py build && cp build/lib.linux*/mmcv/_ext.cpython* mmcv
```
-
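+
+A quick sanity check that the ops extension was built and copied correctly (a minimal sketch; the import path follows from the `cp ... _ext.cpython*` step above):
+
+```bash
+python3 -c "import mmcv._ext; import mmcv; print(mmcv.__version__)"
+```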
## Step 2: Preparing datasets
Go to visit [COCO official website](https://cocodataset.org/#download), then select the COCO dataset you want to download.
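+
+After downloading, the directory linked in the next step is expected to follow the standard COCO 2017 layout (an assumption based on mmdetection's default COCO config):
+
+```bash
+data/coco/
+├── annotations/   # instances_train2017.json, instances_val2017.json
+├── train2017/     # training images
+└── val2017/       # validation images
+```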
@@ -46,19 +45,22 @@ ln -s /path/to/coco2017 data/coco
#### Training on a single GPU
-```
+```bash
bash train.sh
```
#### Training on multiple GPUs
-```
+```bash
bash train_dist.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
+
for example,
-```
+
+```bash
bash train_dist.sh configs/yolof/yolof_r50_c5_8x8_1x_coco.py 8
```
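+
+The `[optional arguments]` are passed through to the training entry point; for instance (assuming the script forwards them to mmdetection's `tools/train.py` as the stock `dist_train.sh` does, so standard flags such as `--work-dir` apply):
+
+```bash
+bash train_dist.sh configs/yolof/yolof_r50_c5_8x8_1x_coco.py 8 --work-dir ./work_dirs/yolof
+```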
## Reference
-https://github.com/open-mmlab/mmdetection
\ No newline at end of file
+
+- [mmdetection](https://github.com/open-mmlab/mmdetection)
\ No newline at end of file
diff --git a/cv/detection/yolov3/pytorch/.gitignore b/cv/detection/yolov3/pytorch/.gitignore
deleted file mode 100644
index 9e17c964ab8775ae3802d55ae776771fa56fa113..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/.gitignore
+++ /dev/null
@@ -1,6 +0,0 @@
-**/__pycache__/
-VOC
-logs
-output
-checkpoints
-data/coco
diff --git a/cv/detection/yolov3/pytorch/LICENSE b/cv/detection/yolov3/pytorch/LICENSE
deleted file mode 100644
index 92b370f0e0e1b91cf8baf5d0f78c56a9824c39f1..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/LICENSE
+++ /dev/null
@@ -1,674 +0,0 @@
-GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
- <one line to give the program's name and a brief idea of what it does.>
- Copyright (C) <year>  <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
- <program>  Copyright (C) <year>  <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/cv/detection/yolov3/pytorch/README.md b/cv/detection/yolov3/pytorch/README.md
index 2a081e9af675438f7a6dcb77c1ed7db6acf6d2de..9a7cf4e1876b54301c2145a82ef56e4eaae46d79 100755
--- a/cv/detection/yolov3/pytorch/README.md
+++ b/cv/detection/yolov3/pytorch/README.md
@@ -2,11 +2,14 @@
## Model description
-We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320 × 320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared to 57.5 AP50 in 198 ms by RetinaNet, similar performance but 3.8× faster. As always, all the code is online at https://pjreddie.com/yolo/.
+We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320 × 320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared to 57.5 AP50 in 198 ms by RetinaNet, similar performance but 3.8× faster. As always, all the code is online at <https://pjreddie.com/yolo/>.
## Step 1: Installing packages
-```shell
+```bash
+# Clone YOLOv3 and install dependencies
+git clone https://gitee.com/deep-spark/deepsparkhub-GPL.git
+cd deepsparkhub-GPL/cv/detection/yolov3/pytorch/
bash setup.sh
```
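+
+Once setup completes, a generic environment check (plain PyTorch, not specific to this repo) confirms that CUDA devices are visible before training:
+
+```bash
+python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
+```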
@@ -35,7 +38,5 @@ bash run_dist_training.sh
```
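+
+To restrict distributed training to specific GPUs, the usual PyTorch convention applies (an assumption; the script is expected to use all visible devices):
+
+```bash
+CUDA_VISIBLE_DEVICES=0,1,2,3 bash run_dist_training.sh
+```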
## Reference
-https://github.com/eriklindernoren/PyTorch-YOLOv3
-
-
+- [YOLOv3](https://github.com/eriklindernoren/PyTorch-YOLOv3)
diff --git a/cv/detection/yolov3/pytorch/common_utils/__init__.py b/cv/detection/yolov3/pytorch/common_utils/__init__.py
deleted file mode 100644
index 32e8c4f57a6ba20a37bb3cfd1e7a5ed59a61f8d4..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/common_utils/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import random
-
-import numpy as np
-
-from .dist import *
-from .metric_logger import *
-from .misc import *
-from .smooth_value import *
-
-def manual_seed(seed, deterministic=False):
- random.seed(seed)
- np.random.seed(seed)
- os.environ['PYTHONHASHSEED'] = str(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
-
- if deterministic:
- torch.backends.cudnn.deterministic = True
- torch.backends.cudnn.benchmark = False
- else:
- torch.backends.cudnn.deterministic = False
- torch.backends.cudnn.benchmark = True
\ No newline at end of file
diff --git a/cv/detection/yolov3/pytorch/common_utils/dist.py b/cv/detection/yolov3/pytorch/common_utils/dist.py
deleted file mode 100644
index ea56ca267755706ab1a62e9d5e93c71b6245c639..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/common_utils/dist.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) 2022 Iluvatar CoreX. All rights reserved.
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-
-from collections import defaultdict, deque
-import datetime
-import errno
-import os
-import time
-
-import torch
-import torch.distributed as dist
-
-
-
-def setup_for_distributed(is_master):
- """
- This function disables printing when not in master process
- """
- import builtins as __builtin__
- builtin_print = __builtin__.print
-
- def print(*args, **kwargs):
- force = kwargs.pop('force', False)
- if is_master or force:
- builtin_print(*args, **kwargs)
-
- __builtin__.print = print
-
-
-def is_dist_avail_and_initialized():
- if not dist.is_available():
- return False
- if not dist.is_initialized():
- return False
- return True
-
-
-def get_world_size():
- if not is_dist_avail_and_initialized():
- return 1
- return dist.get_world_size()
-
-
-def get_rank():
- if not is_dist_avail_and_initialized():
- return 0
- return dist.get_rank()
-
-
-def is_main_process():
- return get_rank() == 0
-
-
-def save_on_master(*args, **kwargs):
- if is_main_process():
- torch.save(*args, **kwargs)
-
-
-def get_dist_backend(args=None):
- DIST_BACKEND_ENV = "PT_DIST_BACKEND"
- if DIST_BACKEND_ENV in os.environ:
- return os.environ[DIST_BACKEND_ENV]
-
- if args is None:
- args = dict()
-
- backend_attr_name = "dist_backend"
-
- if hasattr(args, backend_attr_name):
- return getattr(args, backend_attr_name)
-
- if backend_attr_name in args:
- return args[backend_attr_name]
-
- return "nccl"
-
-
-def init_distributed_mode(args):
- if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
- args.rank = int(os.environ["RANK"])
- args.world_size = int(os.environ['WORLD_SIZE'])
- args.gpu = int(os.environ['LOCAL_RANK'])
- elif 'SLURM_PROCID' in os.environ:
- args.rank = int(os.environ['SLURM_PROCID'])
- args.gpu = args.rank % torch.cuda.device_count()
- else:
- print('Not using distributed mode')
- args.distributed = False
- return
-
- args.distributed = True
-
- torch.cuda.set_device(args.gpu)
- dist_backend = get_dist_backend(args)
- print('| distributed init (rank {}): {}'.format(
- args.rank, args.dist_url), flush=True)
- torch.distributed.init_process_group(backend=dist_backend, init_method=args.dist_url,
- world_size=args.world_size, rank=args.rank)
- torch.distributed.barrier()
- setup_for_distributed(args.rank == 0)
-
-
-def all_gather(data):
- """
- Run all_gather on arbitrary picklable data (not necessarily tensors)
- Args:
- data: any picklable object
- Returns:
- list[data]: list of data gathered from each rank
- """
- world_size = get_world_size()
- if world_size == 1:
- return [data]
- data_list = [None] * world_size
- dist.all_gather_object(data_list, data)
- return data_list
-
-
-def reduce_dict(input_dict, average=True):
- """
- Args:
- input_dict (dict): all the values will be reduced
- average (bool): whether to do average or sum
- Reduce the values in the dictionary from all processes so that all processes
- have the averaged results. Returns a dict with the same fields as
- input_dict, after reduction.
- """
- world_size = get_world_size()
- if world_size < 2:
- return input_dict
- with torch.no_grad():
- names = []
- values = []
- # sort the keys so that they are consistent across processes
- for k in sorted(input_dict.keys()):
- names.append(k)
- values.append(input_dict[k])
- values = torch.stack(values, dim=0)
- dist.all_reduce(values)
- if average:
- values /= world_size
- reduced_dict = {k: v for k, v in zip(names, values)}
- return reduced_dict
diff --git a/cv/detection/yolov3/pytorch/common_utils/metric_logger.py b/cv/detection/yolov3/pytorch/common_utils/metric_logger.py
deleted file mode 100644
index 960641c4da4a8c94aa418c44a22ef5ded8908392..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/common_utils/metric_logger.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2022 Iluvatar CoreX. All rights reserved.
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-
-from collections import defaultdict
-import datetime
-import time
-
-import torch
-from .smooth_value import SmoothedValue
-
-"""
-Examples:
-
-logger = MetricLogger(" ")
-
->>> # For iter dataloader
->>> metric_logger.add_meter('img/s', utils.SmoothedValue(window_size=10, fmt='{value}'))
->>> header = 'Epoch: [{}]'.format(epoch)
->>> for image, target in metric_logger.log_every(data_loader, print_freq, header):
->>> ...
->>> logger.metric_logger.meters['img/s'].update(fps)
-
-"""
-
-class MetricLogger(object):
-
- def __init__(self, delimiter="\t"):
- self.meters = defaultdict(SmoothedValue)
- self.delimiter = delimiter
-
- def update(self, **kwargs):
- for k, v in kwargs.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
- assert isinstance(v, (float, int))
- self.meters[k].update(v)
-
- def __getattr__(self, attr):
- if attr in self.meters:
- return self.meters[attr]
- if attr in self.__dict__:
- return self.__dict__[attr]
- raise AttributeError("'{}' object has no attribute '{}'".format(
- type(self).__name__, attr))
-
- def __str__(self):
- loss_str = []
- for name, meter in self.meters.items():
- loss_str.append(
- "{}: {}".format(name, str(meter))
- )
- return self.delimiter.join(loss_str)
-
- def synchronize_between_processes(self):
- for meter in self.meters.values():
- meter.synchronize_between_processes()
-
- def add_meter(self, name, meter):
- self.meters[name] = meter
-
- def log_every(self, iterable, print_freq, header=None):
- i = 0
- if not header:
- header = ''
- start_time = time.time()
- end = time.time()
- iter_time = SmoothedValue(fmt='{avg:.4f}')
- data_time = SmoothedValue(fmt='{avg:.4f}')
- space_fmt = ':' + str(len(str(len(iterable)))) + 'd'
- log_msg = self.delimiter.join([
- header,
- '[{0' + space_fmt + '}/{1}]',
- 'eta: {eta}',
- '{meters}',
- 'time: {time}',
- 'data: {data}'
- ])
- for obj in iterable:
- data_time.update(time.time() - end)
- yield obj
- iter_time.update(time.time() - end)
- if i % print_freq == 0:
- eta_seconds = iter_time.global_avg * (len(iterable) - i)
- eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
- print(log_msg.format(
- i, len(iterable), eta=eta_string,
- meters=str(self),
- time=str(iter_time), data=str(data_time)))
- i += 1
- end = time.time()
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('{} Total time: {}'.format(header, total_time_str))
diff --git a/cv/detection/yolov3/pytorch/common_utils/misc.py b/cv/detection/yolov3/pytorch/common_utils/misc.py
deleted file mode 100644
index c9b501cf8f6267488d002f2ae2a526c56ad2c392..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/common_utils/misc.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import os
-import sys
-import errno
-
-
-def mkdir(path):
- try:
- os.makedirs(path)
- except OSError as e:
- if e.errno != errno.EEXIST:
- raise
\ No newline at end of file
diff --git a/cv/detection/yolov3/pytorch/common_utils/smooth_value.py b/cv/detection/yolov3/pytorch/common_utils/smooth_value.py
deleted file mode 100644
index 7c5fa1179da79e495eae3d591c322865a76b6dfc..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/common_utils/smooth_value.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) 2022 Iluvatar CoreX. All rights reserved.
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-
-from collections import defaultdict, deque
-import datetime
-import errno
-import os
-import time
-
-import torch
-import torch.distributed as dist
-from .dist import is_dist_avail_and_initialized
-
-
-class SmoothedValue(object):
- """Track a series of values and provide access to smoothed values over a
- window or the global series average.
- """
-
- def __init__(self, window_size=20, fmt=None):
- if fmt is None:
- fmt = "{median:.4f} ({global_avg:.4f})"
- self.deque = deque(maxlen=window_size)
- self.total = 0.0
- self.count = 0
- self.fmt = fmt
-
- def update(self, value, n=1):
- self.deque.append(value)
- self.count += n
- self.total += value * n
-
- def synchronize_between_processes(self):
- """
- Warning: does not synchronize the deque!
- """
- if not is_dist_avail_and_initialized():
- return
- t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')
- dist.barrier()
- dist.all_reduce(t)
- t = t.tolist()
- self.count = int(t[0])
- self.total = t[1]
-
- @property
- def median(self):
- d = torch.tensor(list(self.deque))
- return d.median().item()
-
- @property
- def avg(self):
- d = torch.tensor(list(self.deque), dtype=torch.float32)
- return d.mean().item()
-
- @property
- def global_avg(self):
- return self.total / self.count
-
- @property
- def max(self):
- return max(self.deque)
-
- @property
- def value(self):
- return self.deque[-1]
-
- def __str__(self):
- return self.fmt.format(
- median=self.median,
- avg=self.avg,
- global_avg=self.global_avg,
- max=self.max,
- value=self.value)
\ No newline at end of file
diff --git a/cv/detection/yolov3/pytorch/config/coco.data b/cv/detection/yolov3/pytorch/config/coco.data
deleted file mode 100644
index 18beac135320c9c805d7a013a409275e21479c21..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/config/coco.data
+++ /dev/null
@@ -1,6 +0,0 @@
-classes= 80
-train=data/coco/trainvalno5k.txt
-valid=data/coco/5k.txt
-names=data/coco.names
-backup=backup/
-eval=coco
diff --git a/cv/detection/yolov3/pytorch/config/create_custom_model.sh b/cv/detection/yolov3/pytorch/config/create_custom_model.sh
deleted file mode 100644
index b28ec977674ef60e531ea651e9ae8bd866326ca1..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/config/create_custom_model.sh
+++ /dev/null
@@ -1,794 +0,0 @@
-#!/bin/bash
-
-NUM_CLASSES=$1
-
-echo "
-[net]
-# Testing
-#batch=1
-#subdivisions=1
-# Training
-batch=64
-subdivisions=1
-width=416
-height=416
-channels=3
-momentum=0.9
-decay=0.0005
-angle=0
-saturation = 1.5
-exposure = 1.5
-hue=.1
-
-learning_rate=0.001
-burn_in=1000
-max_batches = 500200
-policy=steps
-steps=400000,450000
-scales=.1,.1
-
-[convolutional]
-batch_normalize=1
-filters=32
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=32
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-######################
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=$(expr 3 \* $(expr $NUM_CLASSES \+ 5))
-activation=linear
-
-
-[yolo]
-mask = 6,7,8
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=$NUM_CLASSES
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
-
-[route]
-layers = -4
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[upsample]
-stride=2
-
-[route]
-layers = -1, 61
-
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=$(expr 3 \* $(expr $NUM_CLASSES \+ 5))
-activation=linear
-
-
-[yolo]
-mask = 3,4,5
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=$NUM_CLASSES
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
-
-
-[route]
-layers = -4
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[upsample]
-stride=2
-
-[route]
-layers = -1, 36
-
-
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=$(expr 3 \* $(expr $NUM_CLASSES \+ 5))
-activation=linear
-
-
-[yolo]
-mask = 0,1,2
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=$NUM_CLASSES
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-" >> yolov3-custom.cfg
diff --git a/cv/detection/yolov3/pytorch/config/custom.data b/cv/detection/yolov3/pytorch/config/custom.data
deleted file mode 100644
index 846fad7410a6957ac7e9d96b46f584864d370fab..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/config/custom.data
+++ /dev/null
@@ -1,4 +0,0 @@
-classes= 1
-train=data/custom/train.txt
-valid=data/custom/valid.txt
-names=data/custom/classes.names
diff --git a/cv/detection/yolov3/pytorch/config/voc.data b/cv/detection/yolov3/pytorch/config/voc.data
deleted file mode 100644
index 7e26da98239b3d8af3f0e587efe0775b5cf3cb08..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/config/voc.data
+++ /dev/null
@@ -1,4 +0,0 @@
-classes= 20
-train=data/voc/train.txt
-valid=data/voc/valid.txt
-names=data/voc.names
diff --git a/cv/detection/yolov3/pytorch/config/yolov3-tiny.cfg b/cv/detection/yolov3/pytorch/config/yolov3-tiny.cfg
deleted file mode 100644
index 23e0bf27384a42f03ee56a0674bb145e70469b0f..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/config/yolov3-tiny.cfg
+++ /dev/null
@@ -1,206 +0,0 @@
-[net]
-# Testing
-#batch=1
-#subdivisions=1
-# Training
-batch=32
-subdivisions=2
-width=416
-height=416
-channels=3
-momentum=0.9
-decay=0.0005
-angle=0
-saturation = 1.5
-exposure = 1.5
-hue=.1
-
-learning_rate=0.001
-burn_in=0
-max_batches = 500200
-policy=steps
-steps=400000,450000
-scales=.1,.1
-
-# 0
-[convolutional]
-batch_normalize=1
-filters=16
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 1
-[maxpool]
-size=2
-stride=2
-
-# 2
-[convolutional]
-batch_normalize=1
-filters=32
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 3
-[maxpool]
-size=2
-stride=2
-
-# 4
-[convolutional]
-batch_normalize=1
-filters=64
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 5
-[maxpool]
-size=2
-stride=2
-
-# 6
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 7
-[maxpool]
-size=2
-stride=2
-
-# 8
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 9
-[maxpool]
-size=2
-stride=2
-
-# 10
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 11
-[maxpool]
-size=2
-stride=1
-
-# 12
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-###########
-
-# 13
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-# 14
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 15
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=255
-activation=linear
-
-
-
-# 16
-[yolo]
-mask = 3,4,5
-anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
-classes=80
-num=6
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
-# 17
-[route]
-layers = -4
-
-# 18
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-# 19
-[upsample]
-stride=2
-
-# 20
-[route]
-layers = -1, 8
-
-# 21
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# 22
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=255
-activation=linear
-
-# 23
-[yolo]
-mask = 1,2,3
-anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
-classes=80
-num=6
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
diff --git a/cv/detection/yolov3/pytorch/config/yolov3-voc.cfg b/cv/detection/yolov3/pytorch/config/yolov3-voc.cfg
deleted file mode 100644
index f026506cb06c487502382b887d78ad4114607570..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/config/yolov3-voc.cfg
+++ /dev/null
@@ -1,790 +0,0 @@
-
-[net]
-# Testing
-#batch=1
-#subdivisions=1
-# Training
-batch=32
-subdivisions=1
-width=416
-height=416
-channels=3
-momentum=0.9
-decay=0.0005
-angle=0
-saturation = 1.5
-exposure = 1.5
-hue=.1
-
-learning_rate=0.001
-burn_in=1000
-max_batches = 500200
-policy=steps
-steps=400000,450000
-scales=.1,.1
-
-[convolutional]
-batch_normalize=1
-filters=32
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=32
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-######################
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=75
-activation=linear
-
-
-[yolo]
-mask = 6,7,8
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=20
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
-
-[route]
-layers = -4
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[upsample]
-stride=2
-
-[route]
-layers = -1, 61
-
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=75
-activation=linear
-
-
-[yolo]
-mask = 3,4,5
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=20
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
-
-
-[route]
-layers = -4
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[upsample]
-stride=2
-
-[route]
-layers = -1, 36
-
-
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=75
-activation=linear
-
-
-[yolo]
-mask = 0,1,2
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=20
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
diff --git a/cv/detection/yolov3/pytorch/config/yolov3.cfg b/cv/detection/yolov3/pytorch/config/yolov3.cfg
deleted file mode 100644
index 799a05f91efe02c09eb0ecba622c436e1135630f..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/config/yolov3.cfg
+++ /dev/null
@@ -1,788 +0,0 @@
-[net]
-# Testing
-#batch=1
-#subdivisions=1
-# Training
-batch=32
-subdivisions=1
-width=416
-height=416
-channels=3
-momentum=0.9
-decay=0.0005
-angle=0
-saturation = 1.5
-exposure = 1.5
-hue=.1
-
-learning_rate=0.001
-burn_in=0
-max_batches = 500200
-policy=steps
-steps=400000,450000
-scales=.1,.1
-
-[convolutional]
-batch_normalize=1
-filters=32
-size=3
-stride=1
-pad=1
-activation=leaky
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=32
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=64
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-# Downsample
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=2
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=1024
-size=3
-stride=1
-pad=1
-activation=leaky
-
-[shortcut]
-from=-3
-activation=linear
-
-######################
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=512
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=1024
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=255
-activation=linear
-
-
-[yolo]
-mask = 6,7,8
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=80
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
-
-[route]
-layers = -4
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[upsample]
-stride=2
-
-[route]
-layers = -1, 61
-
-
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=256
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=512
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=255
-activation=linear
-
-
-[yolo]
-mask = 3,4,5
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=80
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
-
-
-
-[route]
-layers = -4
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[upsample]
-stride=2
-
-[route]
-layers = -1, 36
-
-
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-filters=128
-size=1
-stride=1
-pad=1
-activation=leaky
-
-[convolutional]
-batch_normalize=1
-size=3
-stride=1
-pad=1
-filters=256
-activation=leaky
-
-[convolutional]
-size=1
-stride=1
-pad=1
-filters=255
-activation=linear
-
-
-[yolo]
-mask = 0,1,2
-anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
-classes=80
-num=9
-jitter=.3
-ignore_thresh = .7
-truth_thresh = 1
-random=1
diff --git a/cv/detection/yolov3/pytorch/data/coco.names b/cv/detection/yolov3/pytorch/data/coco.names
deleted file mode 100755
index ca76c80b5b2cd0b25047f75736656cfebc9da7aa..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/coco.names
+++ /dev/null
@@ -1,80 +0,0 @@
-person
-bicycle
-car
-motorbike
-aeroplane
-bus
-train
-truck
-boat
-traffic light
-fire hydrant
-stop sign
-parking meter
-bench
-bird
-cat
-dog
-horse
-sheep
-cow
-elephant
-bear
-zebra
-giraffe
-backpack
-umbrella
-handbag
-tie
-suitcase
-frisbee
-skis
-snowboard
-sports ball
-kite
-baseball bat
-baseball glove
-skateboard
-surfboard
-tennis racket
-bottle
-wine glass
-cup
-fork
-knife
-spoon
-bowl
-banana
-apple
-sandwich
-orange
-broccoli
-carrot
-hot dog
-pizza
-donut
-cake
-chair
-sofa
-pottedplant
-bed
-diningtable
-toilet
-tvmonitor
-laptop
-mouse
-remote
-keyboard
-cell phone
-microwave
-oven
-toaster
-sink
-refrigerator
-book
-clock
-vase
-scissors
-teddy bear
-hair drier
-toothbrush
diff --git a/cv/detection/yolov3/pytorch/data/custom/classes.names b/cv/detection/yolov3/pytorch/data/custom/classes.names
deleted file mode 100755
index 08afa186cb88abb8a60b0e79cad6b9a4fb0ad692..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/custom/classes.names
+++ /dev/null
@@ -1 +0,0 @@
-train
diff --git a/cv/detection/yolov3/pytorch/data/custom/images/train.jpg b/cv/detection/yolov3/pytorch/data/custom/images/train.jpg
deleted file mode 100755
index d8329671d085536572c6afdab8087fa9bb5473e9..0000000000000000000000000000000000000000
Binary files a/cv/detection/yolov3/pytorch/data/custom/images/train.jpg and /dev/null differ
diff --git a/cv/detection/yolov3/pytorch/data/custom/labels/train.txt b/cv/detection/yolov3/pytorch/data/custom/labels/train.txt
deleted file mode 100755
index 3bf4be494b4d7f1cbeac770e62d93add326cd7d7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/custom/labels/train.txt
+++ /dev/null
@@ -1 +0,0 @@
-0 0.515 0.5 0.21694873 0.18286777
diff --git a/cv/detection/yolov3/pytorch/data/custom/train.txt b/cv/detection/yolov3/pytorch/data/custom/train.txt
deleted file mode 100755
index 7fa5443e63d66f1ffc6ac92a729c5fc7fa32f3bd..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/custom/train.txt
+++ /dev/null
@@ -1 +0,0 @@
-data/custom/images/train.jpg
diff --git a/cv/detection/yolov3/pytorch/data/custom/valid.txt b/cv/detection/yolov3/pytorch/data/custom/valid.txt
deleted file mode 100755
index 7fa5443e63d66f1ffc6ac92a729c5fc7fa32f3bd..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/custom/valid.txt
+++ /dev/null
@@ -1 +0,0 @@
-data/custom/images/train.jpg
diff --git a/cv/detection/yolov3/pytorch/data/get_coco_dataset.sh b/cv/detection/yolov3/pytorch/data/get_coco_dataset.sh
deleted file mode 100755
index 5d1c040f67b136e99ae08719fdc15867fafe2021..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/get_coco_dataset.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-
-
-mkdir -p coco/images
-cd coco/images
-
-# Download Images
-wget -c http://10.150.9.95/swapp/datasets/cv/detection/coco2014/train2014.zip
-wget -c http://10.150.9.95/swapp/datasets/cv/detection/coco2014/val2014.zip
-wget -c http://10.150.9.95/swapp/datasets/cv/detection/coco2014/labels.tgz
-
-# Unzip
-unzip -q train2014.zip
-unzip -q val2014.zip
-tar xzf labels.tgz
-
-cd ..
-wget -c "https://pjreddie.com/media/files/coco/5k.part"
-wget -c "https://pjreddie.com/media/files/coco/trainvalno5k.part"
-
-
-# Set Up Image Lists
-paste <(awk "{print \"$PWD\"}" <5k.part) 5k.part | tr -d '\t' > 5k.txt
-paste <(awk "{print \"$PWD\"}" <trainvalno5k.part) trainvalno5k.part | tr -d '\t' > trainvalno5k.txt
diff --git a/cv/detection/yolov3/pytorch/data/voc.names b/cv/detection/yolov3/pytorch/data/voc.names
deleted file mode 100755
index 1168c39990e4604bb76326833eb7814ed275fcec..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/voc.names
+++ /dev/null
@@ -1,20 +0,0 @@
-aeroplane
-bicycle
-bird
-boat
-bottle
-bus
-car
-cat
-chair
-cow
-diningtable
-dog
-horse
-motorbike
-person
-pottedplant
-sheep
-sofa
-train
-tvmonitor
\ No newline at end of file
diff --git a/cv/detection/yolov3/pytorch/data/voc/train.txt b/cv/detection/yolov3/pytorch/data/voc/train.txt
deleted file mode 100755
index 89d49e3dd1b0c7e065c5d23bf465ca43e9b4ce6c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/voc/train.txt
+++ /dev/null
@@ -1,16551 +0,0 @@
-./VOC/train/VOCdevkit/VOC2007/images/000005.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000007.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000009.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000012.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000016.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000017.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000019.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000020.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000021.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000024.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000026.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000030.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000032.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000033.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000034.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000035.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000036.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000039.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000041.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000044.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000046.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000047.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000048.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000050.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000051.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000052.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000060.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000061.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000063.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000065.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000066.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000072.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000073.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000077.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000078.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000081.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000083.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000089.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000091.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000093.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000095.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000099.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000101.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000102.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000104.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000107.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000109.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000110.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000112.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000113.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000118.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000120.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000122.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000123.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000125.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000130.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000131.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000132.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000133.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000134.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000138.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000141.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000142.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000143.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000146.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000147.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000153.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000154.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000156.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000158.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000159.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000161.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000162.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000164.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000165.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000169.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000170.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000171.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000173.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000177.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000180.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000184.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000187.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000190.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000192.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000193.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000194.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000198.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000200.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000203.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000207.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000208.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000210.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000211.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000218.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000219.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000220.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000221.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000222.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000225.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000228.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000229.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000232.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000233.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000235.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000241.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000242.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000245.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000246.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000249.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000250.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000251.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000256.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000257.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000262.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000263.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000266.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000268.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000275.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000276.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000278.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000282.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000288.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000289.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000294.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000296.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000298.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000302.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000303.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000304.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000305.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000306.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000307.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000308.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000311.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000312.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000317.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000320.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000321.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000322.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000328.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000329.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000331.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000332.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000334.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000336.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000337.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000338.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000340.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000343.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000344.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000347.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000349.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000352.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000354.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000355.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000359.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000363.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000367.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000370.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000372.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000373.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000379.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000380.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000381.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000382.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000387.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000391.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000394.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000395.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000396.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000400.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000403.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000404.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000406.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000407.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000408.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000411.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000416.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000417.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000419.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000420.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000427.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000428.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000430.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000431.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000435.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000438.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000446.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000448.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000454.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000459.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000460.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000461.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000462.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000463.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000464.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000469.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000474.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000476.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000477.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000480.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000482.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000483.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000486.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000489.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000491.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000492.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000494.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000496.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000498.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000500.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000501.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000503.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000508.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000509.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000513.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000514.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000515.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000516.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000518.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000520.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000522.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000523.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000525.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000526.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000528.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000530.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000531.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000535.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000537.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000540.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000541.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000543.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000544.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000545.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000550.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000552.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000554.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000555.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000559.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000563.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000564.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000577.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000579.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000581.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000582.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000583.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000589.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000590.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000591.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000592.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000597.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000598.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000599.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000601.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000608.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000609.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000610.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000612.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000613.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000619.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000620.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000622.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000625.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000626.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000628.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000632.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000633.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000635.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000637.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000645.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000648.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000653.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000656.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000657.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000660.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000661.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000663.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000667.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000671.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000672.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000675.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000676.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000677.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000680.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000682.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000684.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000685.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000686.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000688.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000689.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000690.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000694.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000695.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000700.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000705.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000707.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000709.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000710.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000711.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000712.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000714.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000717.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000720.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000726.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000728.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000729.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000730.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000731.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000733.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000738.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000739.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000740.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000742.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000746.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000748.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000750.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000752.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000753.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000754.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000755.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000756.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000761.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000763.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000764.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000767.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000768.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000770.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000771.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000774.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000776.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000777.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000780.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000782.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000786.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000787.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000791.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000793.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000794.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000796.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000797.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000799.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000800.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000802.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000804.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000805.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000806.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000808.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000810.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000812.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000814.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000815.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000816.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000818.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000820.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000822.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000823.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000826.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000827.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000828.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000829.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000830.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000831.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000832.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000834.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000842.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000843.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000847.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000848.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000849.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000850.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000851.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000854.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000855.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000857.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000860.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000862.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000863.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000865.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000867.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000871.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000872.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000874.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000876.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000879.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000880.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000882.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000885.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000887.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000888.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000889.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000892.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000895.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000896.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000898.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000899.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000900.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000902.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000903.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000904.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000906.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000908.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000912.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000915.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000917.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000918.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000919.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000920.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000921.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000923.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000926.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000929.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000931.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000934.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000935.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000936.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000937.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000943.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000946.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000947.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000948.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000949.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000950.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000951.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000962.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000964.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000965.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000966.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000967.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000971.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000972.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000973.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000977.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000980.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000982.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000987.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000989.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000991.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000993.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000996.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000997.jpg
-./VOC/train/VOCdevkit/VOC2007/images/000999.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001001.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001002.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001008.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001009.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001010.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001011.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001012.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001014.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001015.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001017.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001018.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001024.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001027.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001028.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001036.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001041.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001043.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001045.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001050.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001052.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001053.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001056.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001057.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001060.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001061.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001062.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001066.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001068.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001069.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001071.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001072.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001073.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001074.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001077.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001078.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001079.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001082.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001083.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001084.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001091.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001092.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001093.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001097.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001101.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001102.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001104.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001106.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001107.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001109.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001110.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001112.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001113.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001119.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001124.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001125.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001127.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001130.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001136.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001137.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001142.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001143.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001144.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001145.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001147.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001148.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001149.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001151.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001152.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001154.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001156.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001158.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001160.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001161.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001164.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001166.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001168.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001170.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001171.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001172.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001175.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001176.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001182.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001184.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001185.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001186.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001187.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001191.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001192.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001194.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001199.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001200.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001201.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001203.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001204.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001205.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001206.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001207.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001211.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001212.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001221.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001225.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001226.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001229.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001230.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001231.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001233.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001234.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001237.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001239.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001240.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001241.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001247.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001248.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001250.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001254.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001258.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001260.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001263.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001265.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001266.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001268.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001272.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001273.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001274.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001277.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001281.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001284.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001286.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001287.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001288.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001289.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001290.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001292.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001293.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001294.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001298.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001299.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001304.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001309.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001310.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001311.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001312.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001314.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001315.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001316.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001324.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001326.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001327.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001330.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001332.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001333.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001334.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001337.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001341.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001343.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001345.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001346.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001348.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001350.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001352.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001360.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001361.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001362.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001364.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001365.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001371.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001375.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001378.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001383.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001384.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001385.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001386.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001387.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001388.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001390.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001393.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001395.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001397.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001400.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001402.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001404.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001405.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001406.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001408.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001409.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001413.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001414.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001418.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001420.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001421.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001426.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001427.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001430.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001432.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001434.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001436.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001441.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001442.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001444.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001445.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001451.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001453.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001455.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001457.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001460.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001463.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001464.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001467.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001472.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001475.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001479.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001480.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001481.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001483.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001485.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001486.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001488.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001490.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001492.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001493.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001494.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001497.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001498.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001501.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001504.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001509.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001510.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001512.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001514.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001515.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001517.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001521.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001522.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001523.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001526.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001528.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001529.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001531.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001532.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001536.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001537.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001539.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001541.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001543.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001544.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001545.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001548.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001553.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001554.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001555.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001556.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001557.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001559.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001561.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001563.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001571.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001577.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001579.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001580.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001582.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001586.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001590.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001593.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001594.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001595.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001597.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001598.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001603.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001604.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001607.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001608.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001610.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001612.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001614.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001617.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001622.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001627.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001628.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001630.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001632.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001633.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001638.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001640.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001642.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001643.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001649.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001650.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001651.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001653.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001661.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001662.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001669.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001673.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001675.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001676.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001677.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001678.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001680.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001682.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001683.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001684.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001685.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001686.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001688.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001689.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001690.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001691.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001693.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001707.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001708.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001711.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001714.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001717.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001721.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001723.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001724.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001725.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001726.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001727.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001729.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001730.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001733.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001734.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001738.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001739.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001741.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001746.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001747.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001749.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001750.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001752.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001754.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001755.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001756.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001758.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001759.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001761.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001765.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001766.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001768.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001771.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001775.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001777.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001778.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001780.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001782.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001784.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001785.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001787.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001789.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001793.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001795.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001797.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001799.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001800.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001801.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001806.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001807.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001809.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001810.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001816.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001818.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001821.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001825.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001827.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001828.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001830.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001832.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001833.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001834.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001837.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001840.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001842.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001843.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001847.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001849.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001853.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001854.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001855.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001858.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001860.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001861.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001862.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001864.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001870.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001872.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001875.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001877.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001881.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001882.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001887.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001888.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001892.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001894.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001896.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001898.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001899.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001901.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001902.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001903.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001904.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001906.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001907.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001915.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001918.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001920.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001922.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001927.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001928.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001930.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001931.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001932.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001933.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001934.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001936.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001937.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001938.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001940.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001941.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001944.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001945.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001948.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001950.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001952.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001960.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001962.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001963.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001964.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001970.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001971.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001972.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001976.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001977.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001978.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001980.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001981.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001982.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001985.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001989.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001995.jpg
-./VOC/train/VOCdevkit/VOC2007/images/001999.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002000.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002001.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002002.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002006.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002011.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002012.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002015.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002019.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002020.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002021.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002022.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002024.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002025.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002027.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002030.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002034.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002036.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002037.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002039.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002043.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002045.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002047.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002049.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002051.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002054.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002055.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002056.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002058.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002061.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002063.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002067.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002068.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002069.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002070.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002082.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002083.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002086.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002088.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002090.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002091.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002094.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002095.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002096.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002098.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002099.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002101.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002102.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002104.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002109.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002112.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002114.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002116.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002120.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002124.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002125.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002126.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002132.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002134.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002135.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002136.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002139.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002142.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002145.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002146.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002151.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002152.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002153.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002155.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002156.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002158.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002165.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002166.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002169.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002170.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002171.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002172.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002176.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002178.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002179.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002180.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002181.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002182.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002183.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002184.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002186.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002187.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002190.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002191.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002192.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002193.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002194.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002196.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002197.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002199.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002201.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002202.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002208.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002212.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002213.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002218.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002219.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002220.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002221.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002226.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002228.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002233.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002234.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002237.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002238.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002241.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002247.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002248.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002249.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002251.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002253.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002255.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002256.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002257.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002260.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002261.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002263.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002265.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002266.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002267.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002268.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002272.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002273.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002276.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002277.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002278.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002280.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002281.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002284.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002287.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002288.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002290.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002291.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002293.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002300.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002302.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002305.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002306.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002307.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002308.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002310.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002311.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002315.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002320.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002321.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002324.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002328.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002329.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002330.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002332.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002333.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002334.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002335.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002337.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002340.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002342.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002343.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002345.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002347.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002348.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002350.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002352.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002354.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002355.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002359.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002361.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002362.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002364.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002366.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002367.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002368.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002369.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002371.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002372.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002373.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002375.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002376.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002377.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002378.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002382.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002384.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002385.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002387.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002391.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002392.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002393.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002401.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002403.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002404.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002405.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002407.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002410.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002411.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002413.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002415.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002417.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002419.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002420.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002423.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002425.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002427.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002435.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002436.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002437.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002441.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002442.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002444.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002445.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002448.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002452.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002454.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002456.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002458.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002459.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002460.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002461.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002462.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002471.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002472.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002476.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002477.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002478.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002479.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002480.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002481.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002483.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002490.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002491.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002492.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002493.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002494.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002496.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002497.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002500.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002501.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002502.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002504.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002505.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002508.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002512.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002513.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002514.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002518.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002520.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002523.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002525.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002529.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002533.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002534.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002537.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002539.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002540.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002542.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002544.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002545.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002546.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002547.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002554.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002555.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002558.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002559.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002561.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002563.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002564.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002566.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002567.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002569.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002571.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002572.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002578.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002579.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002584.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002586.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002589.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002590.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002593.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002594.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002595.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002598.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002599.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002600.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002603.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002606.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002609.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002613.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002615.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002621.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002625.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002627.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002632.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002633.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002634.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002635.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002637.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002641.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002643.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002645.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002646.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002648.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002649.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002653.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002657.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002658.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002659.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002662.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002664.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002666.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002667.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002668.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002669.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002670.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002675.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002677.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002678.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002680.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002682.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002683.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002684.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002689.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002690.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002691.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002693.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002695.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002696.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002697.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002704.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002706.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002709.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002710.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002714.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002715.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002717.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002721.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002722.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002723.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002727.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002730.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002734.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002735.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002737.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002738.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002741.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002744.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002745.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002747.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002749.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002751.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002755.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002757.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002759.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002762.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002763.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002765.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002766.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002767.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002774.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002775.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002776.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002778.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002779.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002782.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002783.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002784.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002785.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002786.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002791.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002794.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002795.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002796.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002798.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002800.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002801.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002803.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002804.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002807.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002810.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002812.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002815.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002816.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002817.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002820.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002826.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002827.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002833.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002834.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002835.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002838.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002842.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002844.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002847.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002848.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002854.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002855.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002858.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002864.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002866.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002867.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002869.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002870.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002873.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002875.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002879.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002880.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002881.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002884.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002886.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002889.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002891.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002893.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002896.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002899.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002901.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002906.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002910.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002912.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002913.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002914.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002915.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002916.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002917.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002919.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002924.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002931.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002932.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002933.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002934.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002935.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002937.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002938.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002939.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002940.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002941.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002942.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002943.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002944.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002946.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002947.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002952.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002953.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002956.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002957.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002960.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002962.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002963.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002965.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002966.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002967.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002969.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002975.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002976.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002977.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002978.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002984.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002986.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002987.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002988.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002989.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002990.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002992.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002994.jpg
-./VOC/train/VOCdevkit/VOC2007/images/002995.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003000.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003002.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003003.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003005.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003007.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003008.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003009.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003011.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003013.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003015.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003017.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003021.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003024.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003027.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003028.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003031.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003032.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003034.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003038.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003039.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003044.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003045.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003047.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003051.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003053.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003054.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003056.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003057.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003058.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003061.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003063.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003065.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003066.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003072.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003074.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003077.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003078.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003082.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003083.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003085.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003086.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003088.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003089.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003090.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003092.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003093.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003094.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003098.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003100.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003102.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003103.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003105.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003106.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003107.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003110.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003112.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003116.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003118.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003120.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003122.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003124.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003126.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003127.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003133.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003134.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003135.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003137.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003138.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003142.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003145.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003146.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003147.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003149.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003154.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003155.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003157.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003159.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003161.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003162.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003164.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003165.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003169.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003170.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003175.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003176.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003177.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003178.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003181.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003183.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003184.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003185.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003186.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003188.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003194.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003195.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003199.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003200.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003202.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003204.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003205.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003207.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003210.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003211.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003213.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003216.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003218.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003219.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003223.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003228.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003229.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003231.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003233.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003239.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003240.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003242.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003243.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003247.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003250.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003253.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003254.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003255.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003256.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003258.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003260.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003261.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003262.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003271.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003272.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003273.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003274.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003280.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003282.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003284.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003290.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003292.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003293.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003294.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003296.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003299.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003300.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003301.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003303.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003307.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003308.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003311.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003313.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003316.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003320.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003327.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003330.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003331.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003335.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003336.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003337.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003338.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003339.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003343.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003344.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003349.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003350.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003351.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003354.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003355.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003356.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003359.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003360.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003362.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003363.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003365.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003367.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003369.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003370.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003373.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003376.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003377.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003379.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003380.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003382.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003386.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003390.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003391.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003392.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003395.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003396.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003397.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003398.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003401.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003403.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003404.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003406.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003407.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003408.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003410.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003412.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003413.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003415.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003416.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003417.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003419.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003420.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003421.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003422.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003425.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003429.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003430.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003435.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003436.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003441.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003444.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003449.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003451.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003452.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003453.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003455.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003458.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003461.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003462.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003464.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003469.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003477.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003487.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003489.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003491.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003492.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003493.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003496.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003497.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003500.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003506.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003508.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003509.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003510.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003511.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003516.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003518.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003521.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003522.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003525.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003528.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003529.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003530.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003536.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003537.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003539.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003546.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003548.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003550.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003551.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003554.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003555.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003556.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003564.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003566.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003567.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003575.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003577.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003580.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003586.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003587.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003589.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003593.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003594.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003596.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003597.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003599.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003603.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003604.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003606.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003608.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003609.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003614.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003620.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003621.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003622.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003623.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003625.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003627.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003628.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003629.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003632.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003634.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003635.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003638.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003639.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003640.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003642.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003644.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003645.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003646.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003648.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003651.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003655.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003656.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003657.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003658.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003660.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003662.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003663.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003664.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003667.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003669.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003671.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003673.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003674.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003675.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003678.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003679.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003681.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003684.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003685.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003688.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003690.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003691.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003694.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003695.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003696.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003698.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003700.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003703.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003704.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003705.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003706.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003708.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003709.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003711.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003714.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003717.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003721.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003722.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003727.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003729.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003735.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003740.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003743.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003748.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003749.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003750.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003751.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003752.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003753.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003754.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003758.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003759.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003763.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003767.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003773.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003774.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003779.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003780.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003781.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003783.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003784.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003786.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003788.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003790.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003791.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003792.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003793.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003796.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003797.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003798.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003803.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003806.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003807.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003808.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003809.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003811.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003814.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003817.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003818.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003820.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003821.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003824.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003826.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003827.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003828.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003830.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003834.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003835.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003837.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003838.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003844.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003846.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003847.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003848.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003849.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003855.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003856.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003857.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003860.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003861.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003863.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003865.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003866.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003869.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003871.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003872.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003874.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003876.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003877.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003879.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003885.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003886.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003887.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003889.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003890.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003891.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003895.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003898.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003899.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003905.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003907.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003912.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003913.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003915.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003918.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003919.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003921.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003923.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003924.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003926.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003932.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003935.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003936.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003937.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003939.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003941.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003945.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003946.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003947.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003948.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003949.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003953.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003956.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003957.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003960.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003961.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003963.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003965.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003966.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003969.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003970.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003971.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003973.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003974.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003979.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003983.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003984.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003986.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003987.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003988.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003990.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003991.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003992.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003993.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003994.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003996.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003997.jpg
-./VOC/train/VOCdevkit/VOC2007/images/003998.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004003.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004005.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004008.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004009.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004010.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004011.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004012.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004013.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004014.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004015.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004016.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004017.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004019.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004020.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004025.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004028.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004031.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004033.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004034.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004035.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004037.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004039.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004046.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004047.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004051.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004052.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004057.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004058.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004060.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004066.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004067.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004069.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004073.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004075.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004076.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004077.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004082.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004085.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004087.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004089.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004091.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004092.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004093.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004095.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004100.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004102.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004105.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004106.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004110.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004111.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004113.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004120.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004122.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004131.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004133.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004135.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004136.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004137.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004138.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004141.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004142.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004143.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004145.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004146.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004148.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004149.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004152.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004158.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004164.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004168.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004169.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004170.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004171.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004178.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004185.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004186.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004190.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004191.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004192.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004193.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004194.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004195.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004196.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004200.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004201.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004203.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004204.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004205.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004212.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004220.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004221.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004223.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004228.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004229.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004230.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004231.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004232.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004237.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004239.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004241.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004242.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004246.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004247.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004253.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004255.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004256.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004257.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004258.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004263.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004264.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004265.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004271.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004272.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004273.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004274.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004275.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004280.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004281.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004283.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004284.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004286.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004287.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004291.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004292.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004293.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004295.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004296.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004298.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004300.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004303.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004304.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004307.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004310.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004312.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004315.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004321.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004322.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004326.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004327.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004329.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004331.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004333.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004338.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004339.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004341.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004345.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004346.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004347.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004349.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004351.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004352.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004354.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004356.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004359.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004360.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004361.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004364.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004365.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004367.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004368.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004369.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004370.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004371.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004372.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004376.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004379.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004380.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004384.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004386.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004387.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004389.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004390.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004391.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004392.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004396.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004397.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004404.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004405.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004409.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004411.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004421.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004423.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004429.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004430.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004432.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004434.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004436.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004437.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004438.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004441.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004446.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004452.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004455.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004457.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004459.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004463.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004464.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004471.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004474.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004479.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004481.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004487.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004488.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004490.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004493.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004494.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004495.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004496.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004498.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004500.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004502.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004507.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004508.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004509.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004510.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004512.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004514.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004517.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004518.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004520.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004526.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004527.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004528.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004530.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004532.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004535.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004537.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004539.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004540.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004542.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004544.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004548.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004551.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004552.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004553.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004555.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004558.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004562.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004563.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004566.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004570.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004571.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004574.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004579.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004581.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004584.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004587.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004591.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004592.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004595.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004597.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004600.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004601.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004604.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004606.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004607.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004609.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004612.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004622.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004623.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004625.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004626.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004627.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004628.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004630.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004631.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004632.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004634.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004643.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004644.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004648.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004649.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004651.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004652.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004653.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004655.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004656.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004660.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004662.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004671.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004672.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004673.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004674.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004675.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004676.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004679.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004682.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004683.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004685.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004686.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004687.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004689.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004691.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004692.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004693.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004694.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004701.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004705.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004706.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004707.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004708.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004710.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004714.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004715.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004719.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004722.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004723.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004727.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004735.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004737.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004742.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004743.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004746.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004747.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004748.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004750.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004753.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004754.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004761.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004768.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004770.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004773.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004776.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004777.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004779.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004782.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004783.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004785.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004786.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004788.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004789.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004790.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004792.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004793.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004794.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004796.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004797.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004799.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004801.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004805.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004808.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004812.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004814.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004815.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004816.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004818.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004823.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004825.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004826.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004828.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004830.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004831.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004832.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004834.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004837.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004839.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004840.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004842.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004846.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004848.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004849.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004850.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004852.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004856.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004857.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004863.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004866.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004867.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004869.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004872.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004873.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004876.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004879.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004882.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004885.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004886.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004890.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004895.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004896.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004897.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004898.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004902.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004903.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004905.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004907.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004910.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004912.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004913.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004916.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004926.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004928.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004929.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004931.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004935.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004936.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004938.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004939.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004943.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004946.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004948.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004950.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004951.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004953.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004955.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004956.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004960.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004961.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004962.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004963.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004966.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004967.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004968.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004972.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004973.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004974.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004976.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004977.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004982.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004983.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004984.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004985.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004986.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004987.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004990.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004991.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004992.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004994.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004995.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004997.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004998.jpg
-./VOC/train/VOCdevkit/VOC2007/images/004999.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005001.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005003.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005006.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005007.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005014.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005016.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005018.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005020.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005024.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005026.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005027.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005028.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005029.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005032.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005033.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005036.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005037.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005039.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005045.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005047.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005052.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005054.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005055.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005056.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005057.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005058.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005061.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005062.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005063.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005065.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005067.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005068.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005071.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005072.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005073.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005077.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005078.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005079.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005081.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005084.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005085.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005086.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005090.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005093.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005094.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005097.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005101.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005102.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005104.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005107.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005110.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005111.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005114.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005116.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005122.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005124.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005128.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005130.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005131.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005134.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005135.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005136.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005138.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005143.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005144.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005145.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005146.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005153.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005156.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005159.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005160.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005161.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005168.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005169.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005171.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005173.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005175.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005176.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005177.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005179.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005181.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005183.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005185.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005186.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005190.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005191.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005195.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005199.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005202.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005203.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005208.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005210.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005212.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005217.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005219.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005220.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005222.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005223.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005229.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005230.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005231.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005239.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005242.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005245.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005246.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005248.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005253.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005254.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005257.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005258.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005260.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005262.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005263.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005264.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005267.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005268.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005273.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005274.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005278.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005281.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005283.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005288.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005290.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005292.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005293.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005297.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005298.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005303.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005304.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005305.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005306.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005307.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005310.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005311.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005312.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005314.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005315.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005319.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005320.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005326.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005327.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005328.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005331.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005336.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005337.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005338.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005340.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005343.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005344.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005345.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005346.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005348.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005349.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005350.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005351.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005352.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005355.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005358.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005360.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005363.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005365.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005367.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005368.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005369.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005370.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005371.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005373.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005378.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005379.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005380.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005383.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005384.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005385.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005387.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005388.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005389.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005391.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005393.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005395.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005396.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005397.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005398.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005404.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005405.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005406.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005407.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005408.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005410.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005413.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005414.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005416.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005417.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005418.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005419.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005420.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005421.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005423.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005429.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005430.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005431.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005434.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005436.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005438.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005440.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005441.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005445.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005448.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005451.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005453.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005454.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005455.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005457.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005461.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005467.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005469.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005471.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005475.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005478.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005481.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005483.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005485.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005486.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005487.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005489.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005496.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005497.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005507.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005508.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005509.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005510.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005511.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005514.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005515.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005517.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005518.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005521.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005522.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005526.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005527.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005530.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005531.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005535.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005536.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005539.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005541.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005542.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005544.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005547.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005550.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005552.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005554.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005559.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005563.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005566.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005568.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005573.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005574.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005577.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005579.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005582.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005583.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005584.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005586.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005590.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005591.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005592.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005593.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005599.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005600.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005601.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005603.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005606.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005608.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005609.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005613.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005614.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005615.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005620.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005624.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005625.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005629.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005630.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005631.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005637.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005639.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005640.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005641.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005644.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005645.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005648.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005652.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005653.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005655.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005657.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005658.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005660.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005662.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005664.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005668.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005669.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005672.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005674.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005676.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005679.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005680.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005682.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005685.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005686.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005687.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005693.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005695.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005696.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005697.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005700.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005701.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005704.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005705.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005710.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005714.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005715.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005716.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005719.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005723.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005728.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005729.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005730.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005731.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005735.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005736.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005738.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005740.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005741.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005742.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005743.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005747.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005749.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005752.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005755.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005756.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005757.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005761.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005762.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005764.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005765.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005768.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005769.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005773.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005779.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005780.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005781.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005782.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005783.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005784.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005786.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005788.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005789.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005790.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005791.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005794.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005796.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005799.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005803.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005805.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005806.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005811.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005812.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005813.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005814.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005815.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005817.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005818.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005819.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005821.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005824.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005825.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005826.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005828.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005829.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005830.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005831.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005838.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005839.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005840.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005843.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005850.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005851.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005852.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005853.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005854.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005856.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005860.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005861.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005863.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005864.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005867.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005873.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005874.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005875.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005877.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005879.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005881.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005884.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005885.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005888.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005889.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005893.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005894.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005895.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005897.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005899.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005901.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005903.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005905.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005906.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005908.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005909.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005910.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005912.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005914.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005917.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005918.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005919.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005920.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005923.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005928.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005930.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005938.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005940.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005947.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005948.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005951.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005952.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005956.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005960.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005961.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005963.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005964.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005968.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005970.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005971.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005975.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005979.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005980.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005981.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005983.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005984.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005985.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005988.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005989.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005990.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005991.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005992.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005995.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005996.jpg
-./VOC/train/VOCdevkit/VOC2007/images/005998.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006000.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006001.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006005.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006009.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006011.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006012.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006018.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006020.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006025.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006026.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006027.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006028.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006029.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006030.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006033.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006035.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006038.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006041.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006043.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006045.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006046.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006055.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006058.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006061.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006062.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006065.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006066.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006067.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006069.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006070.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006071.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006073.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006074.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006078.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006079.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006084.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006088.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006089.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006091.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006095.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006096.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006097.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006098.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006100.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006103.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006104.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006105.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006107.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006111.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006120.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006123.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006124.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006125.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006128.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006130.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006131.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006133.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006134.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006135.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006136.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006139.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006141.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006146.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006148.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006151.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006153.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006156.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006158.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006159.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006161.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006162.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006166.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006170.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006171.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006172.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006175.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006176.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006177.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006179.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006180.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006181.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006183.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006184.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006185.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006187.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006188.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006190.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006196.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006198.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006201.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006202.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006203.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006206.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006208.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006210.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006212.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006216.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006218.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006219.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006220.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006221.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006222.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006223.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006225.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006229.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006230.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006233.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006234.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006235.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006238.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006240.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006241.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006243.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006247.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006249.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006250.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006251.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006252.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006254.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006258.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006260.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006261.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006262.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006264.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006267.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006272.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006275.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006276.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006277.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006281.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006282.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006284.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006286.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006289.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006290.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006291.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006295.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006296.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006299.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006300.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006301.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006304.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006305.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006306.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006309.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006314.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006319.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006320.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006321.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006329.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006330.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006335.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006337.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006338.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006339.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006341.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006344.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006346.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006348.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006349.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006350.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006351.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006352.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006353.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006355.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006357.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006362.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006363.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006366.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006367.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006369.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006371.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006375.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006377.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006381.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006382.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006385.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006387.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006391.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006392.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006395.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006396.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006398.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006400.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006404.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006409.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006411.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006417.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006418.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006419.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006421.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006425.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006427.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006428.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006429.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006430.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006434.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006436.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006437.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006438.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006440.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006442.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006444.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006445.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006447.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006448.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006449.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006455.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006456.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006458.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006459.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006462.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006463.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006472.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006473.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006474.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006475.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006476.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006480.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006482.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006483.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006486.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006488.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006492.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006495.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006497.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006501.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006503.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006506.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006507.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006509.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006512.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006515.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006520.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006523.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006529.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006530.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006532.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006534.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006536.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006538.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006542.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006543.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006547.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006548.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006550.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006551.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006553.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006556.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006560.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006562.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006564.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006569.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006570.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006572.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006575.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006578.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006583.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006584.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006587.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006593.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006595.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006597.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006599.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006602.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006603.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006606.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006609.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006610.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006612.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006617.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006619.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006621.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006622.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006625.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006626.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006627.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006628.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006631.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006632.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006635.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006637.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006638.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006643.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006645.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006648.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006652.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006657.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006658.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006660.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006661.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006664.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006666.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006667.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006668.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006670.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006671.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006673.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006674.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006677.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006678.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006679.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006681.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006682.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006684.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006687.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006689.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006690.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006694.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006695.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006696.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006697.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006698.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006703.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006704.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006706.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006707.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006708.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006709.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006714.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006719.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006722.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006725.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006726.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006727.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006730.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006731.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006734.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006735.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006736.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006738.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006739.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006740.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006747.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006748.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006751.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006753.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006755.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006759.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006761.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006762.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006765.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006766.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006768.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006769.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006773.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006777.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006781.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006782.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006783.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006784.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006786.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006789.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006794.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006797.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006799.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006800.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006802.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006803.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006805.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006806.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006808.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006810.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006813.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006814.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006819.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006821.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006822.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006824.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006825.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006827.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006828.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006829.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006833.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006835.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006838.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006839.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006840.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006842.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006844.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006847.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006848.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006849.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006850.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006852.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006855.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006858.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006860.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006862.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006864.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006865.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006866.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006867.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006869.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006874.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006876.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006880.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006883.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006884.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006886.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006887.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006892.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006893.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006896.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006899.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006900.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006903.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006908.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006909.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006910.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006912.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006914.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006916.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006917.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006918.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006919.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006922.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006924.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006930.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006931.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006932.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006933.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006934.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006935.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006939.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006940.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006943.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006944.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006945.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006947.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006948.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006949.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006950.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006952.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006953.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006956.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006959.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006962.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006963.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006965.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006966.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006968.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006971.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006972.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006976.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006981.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006983.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006987.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006988.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006989.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006990.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006994.jpg
-./VOC/train/VOCdevkit/VOC2007/images/006995.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007002.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007003.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007006.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007007.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007008.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007009.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007011.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007016.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007018.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007020.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007021.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007022.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007025.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007029.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007031.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007033.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007035.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007036.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007038.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007039.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007040.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007045.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007046.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007048.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007049.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007050.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007052.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007054.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007056.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007058.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007059.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007062.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007065.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007068.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007070.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007071.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007072.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007073.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007074.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007075.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007077.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007078.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007079.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007080.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007084.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007086.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007088.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007089.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007090.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007092.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007093.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007095.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007097.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007100.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007101.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007104.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007105.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007109.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007113.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007114.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007122.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007123.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007125.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007128.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007130.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007132.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007133.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007138.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007139.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007141.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007144.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007146.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007147.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007148.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007149.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007152.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007153.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007154.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007159.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007162.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007165.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007166.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007167.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007168.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007172.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007177.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007180.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007182.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007184.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007185.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007187.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007191.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007193.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007194.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007197.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007200.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007204.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007205.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007208.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007210.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007211.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007212.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007213.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007216.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007217.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007219.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007222.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007223.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007227.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007230.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007234.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007241.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007243.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007245.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007247.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007249.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007250.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007256.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007258.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007260.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007261.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007263.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007266.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007271.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007274.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007275.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007276.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007280.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007283.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007284.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007289.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007292.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007294.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007295.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007296.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007297.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007298.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007299.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007300.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007302.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007305.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007308.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007311.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007314.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007322.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007327.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007329.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007330.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007334.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007336.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007343.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007344.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007346.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007350.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007351.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007356.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007359.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007361.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007363.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007365.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007369.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007370.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007372.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007373.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007375.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007376.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007381.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007383.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007385.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007388.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007389.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007390.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007394.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007396.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007398.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007408.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007410.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007411.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007413.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007414.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007416.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007417.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007419.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007421.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007422.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007425.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007427.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007431.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007432.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007435.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007436.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007437.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007438.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007445.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007446.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007448.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007449.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007451.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007454.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007457.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007458.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007460.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007461.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007467.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007474.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007475.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007477.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007479.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007480.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007481.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007482.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007483.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007486.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007489.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007490.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007491.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007493.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007497.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007498.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007503.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007506.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007511.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007513.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007517.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007521.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007523.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007525.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007526.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007527.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007528.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007530.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007533.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007535.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007536.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007537.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007538.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007540.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007543.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007544.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007546.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007547.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007551.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007555.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007558.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007559.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007563.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007566.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007568.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007570.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007571.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007572.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007575.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007578.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007579.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007586.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007590.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007592.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007594.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007600.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007601.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007603.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007606.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007612.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007614.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007615.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007619.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007621.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007622.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007624.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007626.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007629.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007631.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007633.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007637.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007639.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007640.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007642.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007649.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007650.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007653.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007655.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007656.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007657.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007662.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007663.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007664.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007666.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007667.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007668.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007670.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007671.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007672.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007673.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007675.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007677.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007678.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007679.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007680.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007682.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007683.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007685.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007687.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007688.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007691.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007692.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007694.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007696.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007697.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007704.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007705.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007709.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007712.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007715.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007720.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007721.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007723.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007724.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007727.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007729.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007731.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007735.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007736.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007740.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007742.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007743.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007745.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007746.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007748.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007749.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007751.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007753.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007754.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007758.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007762.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007763.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007765.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007767.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007768.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007773.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007775.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007776.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007777.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007779.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007781.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007786.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007790.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007791.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007793.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007795.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007798.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007799.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007803.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007809.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007810.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007812.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007813.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007814.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007815.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007819.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007820.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007821.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007824.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007826.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007831.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007833.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007834.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007838.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007840.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007843.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007847.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007853.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007854.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007855.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007856.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007857.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007863.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007864.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007865.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007869.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007872.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007873.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007876.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007877.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007883.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007884.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007885.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007886.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007889.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007890.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007897.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007898.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007899.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007900.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007901.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007902.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007905.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007908.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007909.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007910.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007914.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007915.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007916.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007919.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007920.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007921.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007923.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007924.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007925.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007926.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007928.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007931.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007932.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007933.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007935.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007939.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007940.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007943.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007946.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007947.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007950.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007953.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007956.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007959.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007963.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007964.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007968.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007970.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007971.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007974.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007976.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007979.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007980.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007984.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007987.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007991.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007996.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007997.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007998.jpg
-./VOC/train/VOCdevkit/VOC2007/images/007999.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008001.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008002.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008005.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008008.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008009.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008012.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008017.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008019.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008023.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008024.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008026.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008029.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008031.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008032.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008033.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008036.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008037.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008040.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008043.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008044.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008048.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008049.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008051.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008053.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008057.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008060.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008061.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008062.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008063.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008067.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008068.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008069.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008072.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008075.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008076.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008079.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008082.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008083.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008084.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008085.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008086.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008087.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008091.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008093.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008095.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008096.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008098.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008100.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008101.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008103.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008105.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008106.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008107.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008112.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008115.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008116.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008122.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008125.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008127.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008130.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008132.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008137.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008138.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008139.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008140.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008141.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008142.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008144.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008151.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008159.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008160.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008164.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008166.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008168.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008169.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008171.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008173.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008175.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008177.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008180.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008186.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008188.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008190.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008191.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008197.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008199.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008200.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008202.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008203.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008204.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008208.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008211.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008213.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008216.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008218.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008220.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008222.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008223.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008225.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008226.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008229.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008232.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008235.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008241.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008248.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008250.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008251.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008252.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008253.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008254.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008258.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008260.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008261.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008262.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008263.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008268.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008272.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008275.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008280.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008281.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008282.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008284.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008292.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008293.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008294.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008295.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008296.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008297.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008299.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008300.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008301.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008302.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008306.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008307.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008310.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008311.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008312.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008313.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008315.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008316.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008317.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008319.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008320.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008322.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008326.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008327.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008329.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008332.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008335.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008336.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008338.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008341.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008342.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008345.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008346.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008349.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008351.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008355.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008359.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008360.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008364.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008365.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008368.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008370.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008372.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008376.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008381.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008384.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008385.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008386.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008387.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008388.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008390.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008391.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008397.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008398.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008403.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008409.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008410.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008413.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008415.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008416.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008422.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008423.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008425.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008426.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008427.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008429.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008430.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008434.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008437.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008438.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008442.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008444.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008445.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008449.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008450.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008452.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008453.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008454.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008456.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008461.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008462.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008467.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008472.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008475.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008477.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008478.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008482.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008483.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008485.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008492.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008494.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008495.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008498.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008502.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008503.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008506.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008509.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008512.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008513.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008514.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008517.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008518.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008521.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008522.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008523.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008526.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008529.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008530.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008533.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008534.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008535.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008536.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008541.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008542.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008550.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008553.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008556.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008557.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008558.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008559.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008562.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008564.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008568.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008572.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008573.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008581.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008582.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008584.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008586.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008587.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008592.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008595.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008596.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008601.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008602.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008604.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008606.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008607.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008608.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008610.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008612.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008615.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008617.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008620.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008621.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008624.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008628.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008633.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008635.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008638.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008639.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008644.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008645.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008653.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008655.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008663.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008665.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008667.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008670.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008676.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008680.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008683.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008687.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008688.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008690.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008691.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008692.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008695.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008698.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008701.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008706.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008709.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008710.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008716.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008717.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008720.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008722.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008723.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008725.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008727.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008728.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008730.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008731.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008733.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008738.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008739.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008741.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008742.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008744.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008747.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008748.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008749.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008750.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008752.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008753.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008755.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008756.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008757.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008759.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008760.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008764.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008766.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008768.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008769.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008770.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008771.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008773.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008775.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008776.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008783.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008784.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008790.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008793.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008794.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008796.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008799.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008801.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008805.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008806.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008809.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008810.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008811.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008813.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008814.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008815.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008817.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008819.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008822.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008823.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008826.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008831.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008833.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008835.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008837.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008838.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008840.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008843.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008847.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008848.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008849.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008854.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008856.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008858.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008862.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008865.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008867.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008871.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008872.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008873.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008874.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008876.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008879.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008880.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008883.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008884.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008885.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008886.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008888.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008890.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008891.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008892.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008900.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008905.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008909.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008913.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008914.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008917.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008919.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008920.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008921.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008923.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008926.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008927.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008929.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008930.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008931.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008932.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008933.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008936.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008939.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008940.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008942.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008943.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008944.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008948.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008951.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008953.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008955.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008960.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008961.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008962.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008965.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008966.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008967.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008968.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008969.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008970.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008971.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008973.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008975.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008976.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008978.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008979.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008980.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008982.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008983.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008985.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008987.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008988.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008989.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008995.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008997.jpg
-./VOC/train/VOCdevkit/VOC2007/images/008999.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009000.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009002.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009004.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009005.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009006.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009007.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009015.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009016.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009018.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009019.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009020.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009022.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009024.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009027.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009029.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009032.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009034.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009035.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009036.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009037.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009039.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009042.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009045.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009048.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009049.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009051.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009053.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009058.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009059.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009060.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009063.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009064.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009066.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009068.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009072.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009073.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009078.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009079.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009080.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009085.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009086.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009087.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009089.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009091.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009094.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009098.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009099.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009100.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009105.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009106.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009108.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009112.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009113.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009114.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009116.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009117.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009121.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009123.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009126.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009128.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009129.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009131.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009133.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009136.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009138.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009141.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009144.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009147.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009148.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009150.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009151.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009153.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009155.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009157.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009159.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009160.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009161.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009162.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009163.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009166.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009168.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009173.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009174.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009175.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009177.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009178.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009179.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009180.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009181.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009184.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009185.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009186.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009187.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009189.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009191.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009192.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009193.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009194.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009195.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009196.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009197.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009200.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009202.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009205.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009208.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009209.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009212.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009213.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009214.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009215.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009218.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009221.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009224.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009227.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009230.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009236.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009238.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009239.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009242.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009244.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009245.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009246.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009247.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009249.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009250.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009251.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009252.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009254.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009255.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009259.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009268.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009269.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009270.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009271.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009272.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009273.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009278.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009279.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009281.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009282.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009283.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009285.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009286.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009287.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009288.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009289.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009290.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009291.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009295.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009296.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009299.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009303.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009306.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009307.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009308.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009309.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009312.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009315.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009316.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009318.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009323.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009324.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009325.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009326.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009327.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009330.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009331.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009333.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009334.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009336.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009337.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009339.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009342.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009343.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009347.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009348.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009349.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009350.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009351.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009354.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009358.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009359.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009362.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009365.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009368.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009371.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009373.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009374.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009375.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009377.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009378.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009382.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009386.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009388.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009389.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009392.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009393.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009394.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009398.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009401.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009405.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009406.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009407.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009408.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009409.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009410.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009411.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009412.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009413.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009414.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009417.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009418.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009419.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009420.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009421.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009422.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009424.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009429.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009432.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009433.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009434.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009437.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009438.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009439.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009440.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009443.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009445.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009446.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009448.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009454.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009455.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009456.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009457.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009458.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009459.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009460.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009461.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009463.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009464.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009465.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009466.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009468.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009469.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009470.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009472.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009476.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009477.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009479.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009480.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009481.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009484.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009488.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009490.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009491.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009494.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009496.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009497.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009499.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009500.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009502.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009504.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009507.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009508.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009512.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009515.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009516.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009517.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009518.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009519.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009520.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009523.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009524.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009526.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009527.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009528.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009531.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009532.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009533.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009537.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009540.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009541.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009542.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009543.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009545.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009546.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009549.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009550.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009551.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009557.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009558.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009560.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009562.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009565.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009566.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009567.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009568.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009571.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009573.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009576.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009577.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009579.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009580.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009584.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009585.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009586.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009587.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009588.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009591.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009596.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009597.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009598.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009600.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009603.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009605.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009609.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009611.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009613.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009614.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009615.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009617.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009618.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009619.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009620.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009621.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009623.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009627.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009629.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009634.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009636.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009637.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009638.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009641.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009644.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009647.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009649.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009650.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009654.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009655.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009656.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009658.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009659.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009664.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009666.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009667.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009668.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009670.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009671.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009676.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009678.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009679.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009681.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009684.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009685.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009686.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009687.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009691.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009692.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009693.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009695.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009698.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009699.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009700.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009702.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009703.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009706.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009707.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009709.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009710.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009711.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009712.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009713.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009717.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009718.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009719.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009721.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009724.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009726.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009729.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009732.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009733.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009734.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009735.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009737.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009738.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009743.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009745.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009746.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009747.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009748.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009749.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009754.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009755.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009756.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009758.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009761.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009762.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009763.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009764.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009767.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009772.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009773.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009774.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009776.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009778.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009780.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009781.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009785.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009789.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009790.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009792.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009794.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009796.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009797.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009800.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009801.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009805.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009807.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009808.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009809.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009810.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009813.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009816.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009819.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009822.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009823.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009825.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009828.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009830.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009831.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009832.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009833.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009834.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009836.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009839.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009841.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009842.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009845.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009848.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009851.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009852.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009855.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009858.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009859.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009860.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009862.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009863.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009865.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009867.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009868.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009869.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009870.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009872.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009874.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009877.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009878.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009879.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009880.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009881.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009882.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009884.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009886.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009887.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009894.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009896.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009897.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009898.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009900.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009902.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009904.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009905.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009908.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009911.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009913.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009917.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009918.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009920.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009923.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009926.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009932.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009935.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009938.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009939.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009940.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009942.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009944.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009945.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009946.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009947.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009949.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009950.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009954.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009955.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009958.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009959.jpg
-./VOC/train/VOCdevkit/VOC2007/images/009961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000235.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000237.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000306.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000423.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000473.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000489.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000495.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000699.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000700.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000711.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000769.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000787.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000795.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000803.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000832.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000861.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000878.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000964.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_000999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001077.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001235.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001296.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001306.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001334.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001454.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001486.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001495.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001580.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001699.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001769.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001787.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001809.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001832.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001862.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001866.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_001998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002209.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002296.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002454.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002473.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002485.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002495.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002555.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002639.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002700.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002787.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002795.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002809.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002859.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002866.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_002999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003049.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003101.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003157.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003186.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003209.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003334.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003423.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003485.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003489.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003580.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003658.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003769.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003866.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003886.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003925.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003996.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_003998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004077.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004101.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004125.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004235.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004296.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004306.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004396.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004528.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004633.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004654.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004711.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004795.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004832.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004862.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004866.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004964.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_004998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005049.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005101.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005186.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005209.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005235.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005296.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005396.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005423.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005473.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005485.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005517.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005627.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005633.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005639.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005699.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005803.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005818.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005832.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005878.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005918.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005964.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_005997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006049.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006186.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006235.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006355.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006489.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006517.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006528.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006654.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006700.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006818.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006832.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006925.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006949.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_006999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007101.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007237.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007334.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007352.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007423.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007473.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007485.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007486.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007528.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007693.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007787.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007861.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007918.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007949.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007964.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_007999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008029.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008125.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008235.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008237.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008296.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008423.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008517.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008528.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008627.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008654.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008658.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008700.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008711.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2008_008773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000029.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000157.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000209.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000237.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000454.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000486.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000658.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000862.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000886.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000925.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000964.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000996.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_000998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001237.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001306.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001355.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001517.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001555.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001627.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001633.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001693.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001699.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001809.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001818.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001861.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001949.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001964.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_001999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002077.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002235.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002306.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002352.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002423.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002517.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002580.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002711.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002803.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002809.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002862.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002918.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002925.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_002999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003077.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003125.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003157.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003209.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003352.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003396.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003454.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003489.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003517.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003528.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003555.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003627.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003633.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003639.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003654.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003711.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003795.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003818.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003832.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_003995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004125.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004157.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004186.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004334.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004454.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004486.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004580.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004639.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004769.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004787.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004859.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004886.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004996.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_004999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2009_005311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000157.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000209.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000296.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000352.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000473.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000485.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000495.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000564.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000633.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000639.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000658.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000711.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000769.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000787.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000803.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000862.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000866.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000918.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_000996.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001049.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001077.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001125.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001237.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001328.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001355.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001473.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001486.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001489.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001528.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001555.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001580.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001633.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001699.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001700.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001787.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001795.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001803.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001846.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001862.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001869.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001918.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001923.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_001998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002029.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002356.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002378.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002396.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002485.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002580.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002639.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002654.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002693.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002702.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002803.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002905.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002963.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_002995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003024.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003077.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003101.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003157.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003186.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003316.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003355.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003446.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003473.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003495.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003539.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003563.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003574.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003645.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003648.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003737.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003781.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003818.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003859.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003861.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003878.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003915.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003925.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003939.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003949.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003955.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003996.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_003999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004029.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004092.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004104.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004125.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004140.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004157.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004172.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004186.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004191.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004204.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004209.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004225.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004231.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004289.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004296.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004306.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004339.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004351.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004352.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004355.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004363.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004367.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004368.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004423.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004431.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004439.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004478.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004486.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004517.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004523.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004545.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004581.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004627.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004654.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004659.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004660.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004667.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004680.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004686.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004728.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004729.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004773.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004783.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004792.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004795.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004797.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004809.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004832.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004844.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004857.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004861.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004866.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004878.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004918.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004931.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004963.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_004998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005049.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005099.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005101.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005118.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005120.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005155.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005164.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005170.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005199.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005306.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005331.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005340.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005349.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005352.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005353.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005372.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005377.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005401.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005403.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005405.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005415.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005417.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005425.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005437.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005452.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005483.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005489.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005493.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005497.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005500.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005506.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005522.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005527.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005540.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005562.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005570.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005576.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005587.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005593.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005595.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005603.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005604.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005615.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005626.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005627.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005635.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005654.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005658.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005668.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005670.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005672.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005681.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005696.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005700.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005721.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005723.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005735.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005736.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005761.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005762.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005777.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005816.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005835.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005836.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005849.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005860.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005865.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005886.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005892.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005894.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005903.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005907.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005936.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005948.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005949.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005960.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005968.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005976.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005995.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005996.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_005998.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006000.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006026.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006035.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006067.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2010_006086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000007.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000017.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000037.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000051.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000061.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000068.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000070.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000072.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000077.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000083.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000087.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000090.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000094.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000095.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000112.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000122.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000129.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000130.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000165.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000180.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000181.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000195.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000196.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000202.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000206.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000210.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000214.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000219.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000233.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000243.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000249.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000250.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000258.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000267.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000293.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000297.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000299.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000307.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000309.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000314.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000321.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000332.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000338.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000342.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000345.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000361.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000364.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000374.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000376.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000383.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000392.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000396.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000408.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000426.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000427.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000428.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000430.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000438.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000442.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000444.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000445.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000450.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000454.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000465.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000468.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000469.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000472.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000477.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000481.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000485.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000487.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000496.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000499.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000502.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000512.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000513.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000550.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000565.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000569.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000577.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000627.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000630.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000634.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000637.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000646.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000651.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000658.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000682.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000683.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000684.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000688.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000690.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000692.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000701.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000703.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000704.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000711.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000718.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000731.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000734.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000743.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000744.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000749.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000758.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000759.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000763.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000768.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000769.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000774.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000778.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000788.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000804.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000807.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000809.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000813.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000828.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000829.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000839.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000843.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000848.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000850.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000853.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000859.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000874.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000882.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000888.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000898.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000899.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000909.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000934.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000954.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000957.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000973.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000981.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000990.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000996.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_000999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001001.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001008.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001009.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001014.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001015.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001029.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001032.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001052.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001056.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001058.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001060.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001069.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001071.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001080.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001082.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001084.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001117.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001123.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001126.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001127.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001133.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001136.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001139.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001153.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001161.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001175.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001190.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001198.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001203.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001208.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001217.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001226.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001229.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001257.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001263.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001264.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001266.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001277.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001282.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001283.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001285.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001286.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001287.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001288.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001290.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001302.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001304.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001305.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001310.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001311.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001313.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001315.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001319.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001320.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001323.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001326.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001329.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001333.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001336.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001337.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001344.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001354.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001355.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001360.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001369.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001370.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001373.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001375.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001382.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001390.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001399.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001400.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001404.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001411.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001412.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001416.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001424.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001432.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001434.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001440.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001441.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001449.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001451.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001456.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001466.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001467.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001471.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001475.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001480.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001489.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001501.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001508.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001510.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001518.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001521.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001524.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001525.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001529.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001530.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001534.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001537.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001538.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001541.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001544.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001546.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001547.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001549.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001557.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001572.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001573.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001586.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001591.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001596.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001597.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001599.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001600.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001602.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001607.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001608.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001611.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001613.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001619.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001621.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001622.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001625.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001628.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001632.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001642.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001643.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001647.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001653.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001655.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001663.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001665.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001666.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001669.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001671.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001679.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001689.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001691.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001693.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001695.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001698.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001699.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001700.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001705.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001707.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001708.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001710.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001712.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001716.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001720.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001722.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001727.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001732.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001733.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001739.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001741.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001745.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001747.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001753.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001755.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001757.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001764.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001769.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001771.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001785.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001789.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001791.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001793.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001794.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001799.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001800.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001801.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001806.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001815.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001819.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001820.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001822.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001824.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001825.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001827.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001837.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001840.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001845.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001847.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001855.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001856.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001858.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001862.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001866.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001875.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001876.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001877.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001886.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001891.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001893.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001895.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001896.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001901.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001902.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001904.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001906.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001910.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001914.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001919.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001922.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001926.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001928.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001941.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001945.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001946.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001949.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001950.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001952.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001959.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001961.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001964.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001972.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001977.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001980.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001982.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001984.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001986.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001989.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_001991.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002004.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002006.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002018.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002021.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002022.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002031.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002033.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002036.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002040.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002042.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002045.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002046.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002049.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002053.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002062.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002064.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002075.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002088.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002093.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002096.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002100.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002102.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002105.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002106.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002107.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002108.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002110.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002113.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002116.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002119.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002128.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002131.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002135.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002137.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002142.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002143.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002144.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002147.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002156.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002160.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002173.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002174.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002178.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002179.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002186.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002189.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002193.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002200.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002215.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002218.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002221.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002222.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002224.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002227.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002234.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002237.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002239.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002241.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002245.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002248.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002251.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002252.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002265.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002268.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002270.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002272.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002273.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002276.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002278.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002279.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002280.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002281.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002284.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002291.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002292.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002294.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002295.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002298.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002300.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002301.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002303.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002308.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002312.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002317.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002318.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002322.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002324.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002325.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002327.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002330.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002335.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002341.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002343.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002346.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002347.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002348.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002350.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002357.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002358.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002359.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002362.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002365.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002366.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002371.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002379.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002380.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002381.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002384.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002385.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002386.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002387.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002388.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002389.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002391.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002393.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002394.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002395.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002396.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002397.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002398.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002402.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002406.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002407.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002409.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002410.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002413.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002414.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002418.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002419.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002420.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002421.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002422.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002429.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002433.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002435.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002436.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002443.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002447.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002448.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002453.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002455.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002457.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002458.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002459.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002460.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002461.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002462.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002463.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002464.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002470.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002474.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002476.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002479.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002482.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002484.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002488.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002490.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002491.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002492.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002494.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002495.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002498.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002503.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002504.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002505.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002507.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002509.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002511.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002514.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002515.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002516.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002519.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002520.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002526.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002528.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002531.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002532.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002533.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002535.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002536.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002542.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002543.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002548.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002551.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002552.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002553.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002554.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002555.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002556.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002558.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002559.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002560.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002561.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002566.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002567.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002568.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002571.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002575.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002578.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002579.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002582.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002583.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002584.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002585.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002588.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002589.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002590.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002592.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002594.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002598.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002601.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002605.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002606.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002609.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002610.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002612.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002614.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002616.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002617.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002618.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002620.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002623.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002624.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002629.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002631.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002636.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002638.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002639.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002640.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002641.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002644.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002649.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002650.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002652.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002656.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002657.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002658.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002661.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002662.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002664.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002673.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002674.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002675.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002676.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002677.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002678.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002685.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002687.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002694.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002697.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002699.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002706.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002709.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002713.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002714.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002715.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002717.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002719.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002724.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002725.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002726.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002730.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002738.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002740.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002742.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002746.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002748.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002750.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002751.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002752.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002754.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002756.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002760.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002765.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002766.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002767.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002770.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002772.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002775.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002776.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002779.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002780.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002782.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002784.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002786.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002790.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002795.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002796.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002798.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002802.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002803.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002805.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002808.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002810.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002811.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002812.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002814.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002817.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002818.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002821.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002823.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002826.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002830.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002831.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002833.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002834.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002838.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002841.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002842.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002851.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002852.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002854.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002863.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002864.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002867.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002868.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002870.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002871.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002872.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002873.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002879.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002880.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002881.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002883.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002884.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002885.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002887.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002889.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002890.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002897.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002900.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002908.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002911.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002912.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002913.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002916.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002917.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002920.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002921.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002924.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002925.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002927.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002929.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002930.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002932.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002933.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002935.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002937.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002938.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002940.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002942.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002943.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002944.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002947.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002949.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002951.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002953.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002956.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002958.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002962.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002965.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002966.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002967.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002969.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002970.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002971.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002974.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002975.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002978.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002979.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002983.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002985.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002987.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002988.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002992.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002993.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002994.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002997.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_002999.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003002.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003003.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003005.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003010.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003011.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003012.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003013.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003016.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003019.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003020.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003023.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003025.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003027.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003028.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003029.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003030.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003034.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003038.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003039.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003041.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003043.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003044.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003047.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003048.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003049.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003050.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003054.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003055.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003057.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003059.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003063.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003065.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003066.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003073.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003074.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003076.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003078.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003079.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003081.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003085.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003086.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003089.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003091.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003097.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003098.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003103.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003109.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003111.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003114.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003115.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003121.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003124.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003132.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003134.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003138.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003141.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003145.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003146.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003148.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003149.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003150.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003151.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003152.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003154.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003158.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003159.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003162.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003163.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003166.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003167.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003168.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003169.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003171.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003176.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003177.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003182.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003183.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003184.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003185.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003187.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003188.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003192.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003194.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003197.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003201.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003205.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003207.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003211.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003212.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003213.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003216.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003220.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003223.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003228.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003230.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003232.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003236.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003238.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003240.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003242.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003244.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003246.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003247.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003253.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003254.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003255.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003256.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003259.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003260.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003261.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003262.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003269.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003271.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003274.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003275.jpg
-./VOC/train/VOCdevkit/VOC2012/images/2011_003276.jpg
diff --git a/cv/detection/yolov3/pytorch/data/voc/valid.txt b/cv/detection/yolov3/pytorch/data/voc/valid.txt
deleted file mode 100755
index a541dac5c82780ed9b411cbf1a8b5626467b726b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/data/voc/valid.txt
+++ /dev/null
@@ -1,4952 +0,0 @@
-./VOC/test/VOCdevkit/VOC2007/images/000001.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000002.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000003.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000004.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000006.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000008.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000011.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000014.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000015.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000018.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000022.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000025.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000027.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000028.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000029.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000031.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000037.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000038.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000043.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000045.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000049.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000053.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000054.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000055.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000056.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000057.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000058.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000062.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000067.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000068.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000069.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000070.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000071.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000074.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000075.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000079.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000082.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000084.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000085.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000086.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000087.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000088.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000090.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000092.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000094.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000096.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000097.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000098.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000100.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000103.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000105.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000106.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000108.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000111.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000114.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000116.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000124.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000126.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000127.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000128.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000135.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000136.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000137.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000139.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000144.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000145.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000148.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000149.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000151.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000152.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000155.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000160.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000166.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000168.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000172.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000175.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000176.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000178.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000179.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000181.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000182.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000183.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000185.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000186.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000188.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000191.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000195.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000196.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000197.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000199.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000201.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000202.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000204.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000205.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000212.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000213.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000216.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000223.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000226.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000230.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000231.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000234.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000237.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000238.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000239.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000240.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000243.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000247.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000248.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000252.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000253.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000254.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000255.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000258.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000260.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000261.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000264.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000265.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000267.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000271.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000272.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000273.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000274.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000277.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000279.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000280.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000281.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000283.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000284.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000286.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000287.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000290.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000291.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000292.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000293.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000295.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000297.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000299.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000300.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000301.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000309.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000310.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000314.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000315.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000316.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000319.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000324.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000326.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000327.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000330.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000333.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000335.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000339.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000341.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000342.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000345.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000346.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000348.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000350.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000351.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000356.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000360.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000361.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000362.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000364.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000365.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000368.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000369.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000371.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000375.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000376.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000377.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000378.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000383.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000384.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000385.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000386.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000388.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000389.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000390.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000392.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000393.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000397.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000398.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000401.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000405.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000409.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000410.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000413.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000414.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000415.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000418.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000421.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000422.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000423.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000425.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000429.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000432.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000434.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000436.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000437.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000440.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000441.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000442.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000444.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000445.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000449.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000451.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000452.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000453.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000455.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000456.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000457.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000458.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000465.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000466.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000467.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000471.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000472.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000475.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000478.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000479.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000481.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000485.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000487.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000488.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000490.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000493.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000495.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000497.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000502.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000504.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000506.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000507.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000510.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000511.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000512.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000517.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000521.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000527.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000529.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000532.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000533.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000534.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000536.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000539.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000542.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000546.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000547.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000548.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000551.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000553.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000556.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000557.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000558.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000562.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000566.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000567.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000568.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000570.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000571.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000572.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000573.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000575.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000576.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000578.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000580.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000584.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000585.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000586.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000587.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000593.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000594.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000595.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000596.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000600.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000603.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000604.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000606.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000607.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000611.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000614.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000615.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000617.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000618.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000621.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000623.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000624.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000627.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000629.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000630.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000631.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000634.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000636.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000638.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000639.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000640.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000641.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000642.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000643.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000644.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000649.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000650.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000651.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000652.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000655.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000658.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000662.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000664.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000666.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000668.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000669.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000670.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000673.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000674.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000678.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000679.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000681.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000683.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000687.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000691.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000692.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000693.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000696.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000697.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000698.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000701.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000703.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000704.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000706.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000708.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000715.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000718.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000719.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000721.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000722.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000723.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000724.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000725.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000727.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000732.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000734.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000735.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000736.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000737.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000741.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000743.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000745.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000747.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000749.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000751.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000757.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000758.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000759.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000762.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000765.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000766.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000769.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000773.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000775.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000778.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000779.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000781.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000783.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000784.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000785.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000788.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000789.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000790.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000792.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000795.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000798.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000801.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000803.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000807.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000809.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000811.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000813.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000817.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000819.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000821.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000824.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000825.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000833.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000835.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000836.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000837.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000838.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000839.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000840.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000841.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000844.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000852.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000853.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000856.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000858.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000861.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000864.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000866.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000869.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000870.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000873.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000875.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000877.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000881.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000883.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000884.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000886.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000890.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000891.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000893.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000894.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000897.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000901.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000905.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000907.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000909.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000910.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000913.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000914.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000916.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000924.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000928.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000930.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000932.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000933.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000938.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000939.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000940.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000941.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000942.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000944.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000945.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000952.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000953.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000955.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000956.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000959.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000960.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000961.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000963.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000968.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000969.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000970.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000974.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000975.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000976.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000978.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000979.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000981.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000983.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000984.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000985.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000986.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000988.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000990.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000992.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000994.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000995.jpg
-./VOC/test/VOCdevkit/VOC2007/images/000998.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001000.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001003.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001005.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001006.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001007.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001016.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001019.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001020.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001021.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001022.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001023.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001025.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001026.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001029.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001030.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001031.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001032.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001033.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001034.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001035.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001037.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001038.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001039.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001044.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001046.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001047.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001048.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001049.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001051.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001054.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001055.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001058.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001063.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001065.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001067.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001070.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001075.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001085.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001086.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001087.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001088.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001089.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001090.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001094.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001095.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001096.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001098.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001099.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001100.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001103.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001105.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001108.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001111.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001114.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001116.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001117.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001120.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001122.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001123.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001126.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001128.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001131.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001132.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001133.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001134.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001135.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001138.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001139.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001141.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001146.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001150.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001153.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001155.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001159.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001162.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001163.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001165.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001169.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001173.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001177.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001178.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001179.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001180.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001181.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001183.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001188.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001189.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001190.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001193.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001195.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001196.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001197.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001202.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001208.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001210.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001213.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001216.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001218.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001219.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001220.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001222.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001223.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001228.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001232.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001235.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001238.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001242.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001243.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001244.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001245.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001246.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001249.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001251.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001252.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001253.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001255.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001256.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001257.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001261.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001262.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001264.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001267.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001271.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001275.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001276.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001278.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001280.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001282.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001283.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001285.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001291.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001295.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001296.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001297.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001300.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001301.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001302.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001303.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001305.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001306.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001307.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001308.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001318.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001319.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001320.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001321.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001322.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001328.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001329.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001331.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001335.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001336.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001338.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001339.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001340.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001342.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001344.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001347.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001349.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001351.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001354.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001355.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001356.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001359.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001363.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001367.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001368.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001369.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001370.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001372.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001373.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001374.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001376.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001377.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001379.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001380.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001381.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001382.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001389.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001391.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001392.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001394.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001396.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001398.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001401.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001403.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001407.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001410.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001411.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001415.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001416.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001417.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001419.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001422.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001423.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001424.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001425.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001429.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001431.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001433.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001435.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001437.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001438.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001440.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001446.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001448.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001449.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001452.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001454.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001456.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001458.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001459.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001461.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001462.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001469.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001471.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001474.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001476.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001477.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001478.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001482.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001487.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001489.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001491.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001495.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001496.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001500.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001502.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001503.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001506.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001507.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001508.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001511.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001513.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001516.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001518.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001519.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001520.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001525.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001527.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001530.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001533.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001534.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001535.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001540.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001542.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001546.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001547.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001549.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001550.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001551.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001552.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001558.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001562.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001564.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001566.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001567.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001568.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001570.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001572.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001573.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001575.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001578.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001581.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001583.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001584.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001585.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001587.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001589.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001591.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001592.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001596.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001599.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001600.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001601.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001605.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001606.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001609.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001613.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001615.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001619.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001620.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001621.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001623.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001624.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001625.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001626.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001629.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001631.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001634.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001635.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001637.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001639.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001641.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001644.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001645.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001648.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001652.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001655.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001656.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001657.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001658.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001660.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001663.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001664.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001666.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001667.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001668.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001670.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001671.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001672.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001674.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001679.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001681.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001687.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001692.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001694.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001695.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001696.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001697.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001698.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001700.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001701.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001702.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001703.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001704.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001705.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001706.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001709.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001710.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001712.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001715.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001719.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001720.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001722.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001728.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001731.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001735.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001736.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001737.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001740.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001742.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001743.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001745.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001748.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001751.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001753.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001757.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001760.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001762.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001763.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001764.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001767.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001769.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001770.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001773.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001774.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001776.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001779.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001781.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001783.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001786.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001788.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001790.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001791.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001792.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001794.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001796.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001798.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001803.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001805.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001808.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001811.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001812.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001813.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001814.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001815.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001817.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001819.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001820.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001822.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001823.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001824.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001826.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001829.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001831.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001835.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001838.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001839.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001844.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001848.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001850.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001851.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001852.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001856.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001857.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001859.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001863.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001865.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001866.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001867.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001868.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001869.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001871.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001873.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001874.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001876.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001879.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001880.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001883.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001884.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001885.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001886.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001889.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001890.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001891.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001893.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001895.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001897.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001900.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001905.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001908.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001909.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001910.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001912.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001913.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001914.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001916.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001917.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001919.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001921.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001923.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001924.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001926.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001929.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001935.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001939.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001942.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001943.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001946.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001947.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001949.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001951.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001953.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001955.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001956.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001959.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001961.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001965.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001966.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001967.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001968.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001969.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001973.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001974.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001975.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001979.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001983.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001984.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001986.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001987.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001988.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001990.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001991.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001992.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001993.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001994.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001996.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001997.jpg
-./VOC/test/VOCdevkit/VOC2007/images/001998.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002003.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002005.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002007.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002008.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002009.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002014.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002016.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002017.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002018.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002026.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002028.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002029.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002031.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002032.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002033.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002035.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002038.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002041.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002044.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002046.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002048.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002050.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002052.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002053.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002057.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002060.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002062.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002065.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002066.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002071.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002072.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002073.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002074.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002075.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002077.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002078.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002079.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002084.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002085.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002087.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002089.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002092.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002093.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002097.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002100.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002103.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002105.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002106.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002107.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002110.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002111.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002113.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002121.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002122.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002123.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002127.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002128.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002130.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002131.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002133.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002137.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002138.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002141.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002143.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002144.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002147.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002148.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002149.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002150.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002154.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002159.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002160.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002161.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002162.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002164.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002168.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002173.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002175.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002177.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002185.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002188.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002189.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002195.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002200.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002203.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002204.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002205.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002207.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002210.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002211.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002216.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002222.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002223.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002225.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002229.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002230.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002231.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002232.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002235.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002236.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002239.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002240.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002242.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002243.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002245.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002246.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002250.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002252.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002254.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002258.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002262.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002264.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002269.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002271.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002274.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002275.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002282.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002283.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002286.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002289.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002292.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002294.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002295.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002296.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002297.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002298.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002299.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002301.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002303.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002304.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002309.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002312.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002314.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002316.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002319.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002322.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002325.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002326.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002327.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002331.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002336.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002338.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002339.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002341.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002344.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002346.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002349.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002351.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002356.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002360.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002363.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002365.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002370.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002379.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002380.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002381.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002383.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002386.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002388.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002389.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002390.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002394.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002395.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002396.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002397.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002398.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002400.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002406.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002408.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002409.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002414.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002416.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002418.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002421.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002422.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002424.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002429.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002430.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002431.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002432.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002434.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002438.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002440.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002446.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002449.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002451.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002453.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002455.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002457.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002463.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002464.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002467.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002469.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002474.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002475.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002482.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002484.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002485.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002486.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002487.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002488.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002489.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002495.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002498.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002499.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002503.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002506.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002507.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002509.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002510.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002511.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002515.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002516.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002517.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002521.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002522.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002526.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002527.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002528.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002530.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002531.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002532.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002535.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002536.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002541.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002543.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002548.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002550.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002551.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002552.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002553.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002556.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002557.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002562.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002568.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002570.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002573.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002575.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002576.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002577.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002580.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002581.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002582.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002583.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002587.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002588.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002591.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002592.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002596.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002597.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002601.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002604.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002607.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002608.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002610.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002612.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002614.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002617.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002619.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002620.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002622.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002623.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002624.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002626.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002628.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002629.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002630.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002631.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002638.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002639.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002640.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002642.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002644.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002650.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002651.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002652.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002654.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002655.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002656.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002660.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002661.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002663.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002671.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002672.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002673.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002674.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002676.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002679.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002681.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002685.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002686.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002687.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002688.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002692.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002694.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002698.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002700.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002701.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002703.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002705.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002707.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002708.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002711.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002712.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002719.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002720.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002724.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002725.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002726.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002728.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002729.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002731.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002733.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002736.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002739.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002740.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002742.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002743.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002746.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002748.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002750.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002752.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002753.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002754.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002756.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002758.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002761.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002764.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002768.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002769.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002770.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002771.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002773.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002777.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002780.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002781.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002788.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002789.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002790.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002792.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002793.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002797.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002799.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002805.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002806.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002808.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002809.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002811.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002813.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002814.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002818.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002819.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002821.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002822.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002823.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002824.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002825.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002828.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002829.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002830.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002831.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002832.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002837.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002839.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002840.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002843.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002849.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002850.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002851.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002852.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002853.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002856.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002857.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002860.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002861.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002862.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002863.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002865.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002871.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002872.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002874.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002876.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002877.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002878.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002882.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002883.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002885.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002887.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002888.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002890.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002892.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002894.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002895.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002897.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002898.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002900.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002902.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002903.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002904.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002905.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002907.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002908.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002909.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002911.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002918.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002920.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002921.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002923.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002926.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002928.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002929.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002930.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002936.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002945.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002948.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002949.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002950.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002951.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002955.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002959.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002961.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002964.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002968.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002970.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002971.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002972.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002973.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002974.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002979.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002980.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002981.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002982.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002983.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002985.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002991.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002993.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002996.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002997.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002998.jpg
-./VOC/test/VOCdevkit/VOC2007/images/002999.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003001.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003006.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003012.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003014.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003016.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003018.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003019.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003020.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003022.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003025.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003026.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003029.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003030.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003033.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003035.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003036.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003037.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003041.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003043.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003046.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003048.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003049.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003050.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003052.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003055.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003060.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003062.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003067.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003068.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003069.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003070.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003071.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003073.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003075.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003079.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003084.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003087.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003091.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003095.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003096.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003097.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003099.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003101.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003104.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003109.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003111.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003113.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003114.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003123.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003125.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003128.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003130.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003131.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003132.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003136.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003139.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003141.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003143.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003144.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003148.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003151.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003152.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003153.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003156.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003158.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003160.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003166.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003168.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003171.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003172.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003173.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003174.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003179.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003180.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003182.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003187.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003190.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003191.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003192.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003193.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003196.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003197.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003201.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003203.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003208.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003209.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003212.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003215.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003220.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003221.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003222.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003224.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003225.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003226.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003230.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003232.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003234.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003235.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003237.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003238.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003241.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003245.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003246.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003248.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003249.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003251.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003252.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003257.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003263.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003264.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003265.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003266.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003267.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003268.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003275.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003276.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003277.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003278.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003281.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003283.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003286.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003287.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003288.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003289.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003291.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003295.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003297.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003298.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003302.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003304.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003305.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003306.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003309.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003310.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003312.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003314.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003315.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003318.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003319.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003321.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003322.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003323.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003324.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003326.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003328.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003329.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003332.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003333.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003334.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003340.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003341.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003342.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003345.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003346.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003347.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003348.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003352.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003361.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003364.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003368.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003371.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003372.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003375.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003378.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003381.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003383.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003384.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003385.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003387.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003388.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003389.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003393.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003394.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003400.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003405.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003409.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003411.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003414.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003418.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003423.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003427.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003431.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003432.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003434.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003437.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003438.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003440.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003442.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003445.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003446.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003448.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003454.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003456.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003457.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003459.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003460.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003463.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003467.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003471.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003472.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003474.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003475.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003476.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003478.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003479.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003480.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003481.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003482.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003483.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003485.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003486.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003488.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003490.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003494.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003495.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003498.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003501.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003502.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003503.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003504.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003507.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003512.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003513.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003514.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003515.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003517.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003520.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003523.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003526.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003527.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003531.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003532.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003533.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003534.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003535.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003540.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003541.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003542.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003543.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003544.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003545.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003547.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003552.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003553.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003557.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003558.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003559.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003562.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003563.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003568.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003570.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003571.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003572.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003573.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003578.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003579.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003581.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003582.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003583.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003584.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003590.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003591.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003592.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003595.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003598.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003600.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003601.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003607.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003610.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003612.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003613.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003615.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003617.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003619.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003624.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003626.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003630.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003631.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003633.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003637.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003641.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003643.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003647.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003649.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003650.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003652.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003653.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003661.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003666.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003668.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003670.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003672.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003676.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003677.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003680.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003682.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003683.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003686.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003687.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003689.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003692.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003693.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003697.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003701.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003702.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003707.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003710.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003712.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003715.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003718.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003719.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003720.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003723.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003724.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003725.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003726.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003728.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003730.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003731.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003733.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003734.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003736.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003737.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003738.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003739.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003741.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003742.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003745.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003746.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003747.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003755.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003756.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003757.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003761.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003762.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003764.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003765.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003766.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003768.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003769.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003770.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003771.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003775.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003776.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003777.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003778.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003782.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003785.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003789.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003794.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003795.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003799.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003800.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003801.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003805.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003810.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003812.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003813.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003815.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003816.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003819.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003822.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003823.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003825.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003829.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003831.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003832.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003833.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003836.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003839.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003840.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003841.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003842.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003843.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003850.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003851.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003852.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003853.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003854.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003858.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003862.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003864.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003867.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003870.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003873.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003875.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003878.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003880.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003881.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003882.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003883.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003884.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003888.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003892.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003893.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003894.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003896.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003897.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003900.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003901.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003902.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003903.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003904.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003906.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003908.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003909.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003910.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003914.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003916.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003917.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003920.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003928.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003929.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003930.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003931.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003933.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003934.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003938.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003940.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003942.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003943.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003944.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003950.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003951.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003952.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003955.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003958.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003959.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003962.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003964.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003967.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003968.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003972.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003975.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003976.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003977.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003978.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003980.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003981.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003982.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003985.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003989.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003995.jpg
-./VOC/test/VOCdevkit/VOC2007/images/003999.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004000.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004001.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004002.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004004.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004006.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004007.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004018.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004021.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004022.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004024.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004026.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004027.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004029.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004030.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004032.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004036.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004038.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004041.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004042.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004043.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004044.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004045.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004048.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004049.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004050.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004053.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004054.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004055.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004056.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004061.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004062.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004063.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004064.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004065.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004068.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004070.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004071.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004072.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004074.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004078.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004079.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004083.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004084.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004086.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004088.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004090.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004094.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004096.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004097.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004098.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004099.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004101.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004103.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004104.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004107.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004109.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004112.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004114.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004116.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004123.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004124.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004125.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004126.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004127.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004128.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004130.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004132.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004134.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004139.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004144.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004147.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004151.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004153.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004154.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004155.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004156.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004159.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004160.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004161.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004162.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004165.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004166.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004172.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004173.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004175.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004176.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004177.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004179.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004180.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004181.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004182.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004183.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004184.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004187.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004188.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004197.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004199.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004202.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004207.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004208.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004210.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004211.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004213.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004214.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004216.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004218.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004219.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004222.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004225.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004226.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004233.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004234.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004235.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004236.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004238.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004240.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004243.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004245.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004248.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004249.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004250.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004251.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004252.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004254.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004260.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004261.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004262.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004266.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004267.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004268.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004276.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004277.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004278.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004282.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004285.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004288.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004289.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004290.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004294.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004297.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004299.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004301.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004302.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004305.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004306.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004308.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004309.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004311.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004314.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004316.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004319.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004320.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004324.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004328.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004330.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004332.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004334.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004335.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004336.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004337.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004340.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004342.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004343.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004344.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004348.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004350.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004355.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004362.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004363.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004373.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004374.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004375.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004377.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004378.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004381.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004382.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004383.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004385.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004388.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004393.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004394.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004395.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004398.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004400.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004401.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004403.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004406.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004407.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004408.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004410.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004413.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004414.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004415.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004416.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004417.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004418.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004419.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004420.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004422.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004425.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004427.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004431.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004435.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004440.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004442.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004443.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004444.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004445.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004448.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004449.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004451.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004453.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004454.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004456.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004458.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004460.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004461.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004462.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004465.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004467.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004469.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004472.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004475.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004476.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004477.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004478.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004480.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004482.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004483.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004485.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004486.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004489.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004491.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004492.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004497.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004501.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004503.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004504.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004506.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004511.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004513.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004515.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004516.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004521.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004522.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004523.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004525.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004529.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004531.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004533.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004534.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004536.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004541.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004543.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004545.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004546.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004547.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004550.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004554.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004556.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004557.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004559.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004564.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004567.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004568.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004572.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004573.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004575.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004577.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004578.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004580.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004582.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004583.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004586.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004589.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004590.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004593.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004594.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004596.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004598.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004599.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004603.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004608.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004610.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004613.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004614.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004615.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004617.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004619.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004620.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004621.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004624.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004629.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004633.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004635.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004637.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004638.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004639.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004640.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004641.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004642.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004645.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004650.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004657.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004658.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004661.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004663.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004664.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004666.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004667.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004668.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004669.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004670.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004677.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004678.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004680.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004681.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004684.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004688.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004690.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004695.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004696.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004697.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004698.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004700.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004703.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004704.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004709.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004711.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004712.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004713.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004717.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004720.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004721.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004724.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004725.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004726.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004728.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004729.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004730.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004731.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004733.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004734.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004736.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004738.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004739.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004740.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004741.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004745.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004749.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004751.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004752.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004755.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004756.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004757.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004758.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004759.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004762.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004763.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004764.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004765.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004766.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004767.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004769.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004771.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004772.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004774.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004775.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004778.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004780.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004781.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004784.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004791.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004795.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004798.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004800.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004803.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004806.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004807.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004809.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004810.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004811.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004813.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004817.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004819.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004820.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004821.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004822.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004824.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004827.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004829.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004833.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004835.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004838.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004843.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004844.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004845.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004847.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004851.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004853.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004854.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004855.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004858.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004860.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004861.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004862.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004864.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004865.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004870.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004871.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004874.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004875.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004877.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004880.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004881.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004883.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004884.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004887.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004888.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004889.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004891.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004892.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004893.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004894.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004899.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004900.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004901.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004904.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004906.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004908.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004909.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004914.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004915.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004917.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004918.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004919.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004920.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004921.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004923.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004924.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004930.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004932.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004933.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004934.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004937.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004940.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004941.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004942.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004944.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004945.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004947.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004949.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004952.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004959.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004964.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004965.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004969.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004970.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004971.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004975.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004978.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004979.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004980.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004981.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004988.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004989.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004993.jpg
-./VOC/test/VOCdevkit/VOC2007/images/004996.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005000.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005002.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005005.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005008.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005009.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005011.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005012.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005015.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005017.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005019.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005021.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005022.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005025.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005030.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005031.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005034.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005035.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005038.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005041.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005043.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005044.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005046.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005048.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005049.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005050.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005051.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005053.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005060.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005066.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005069.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005070.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005074.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005075.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005082.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005083.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005087.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005088.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005089.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005091.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005092.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005095.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005096.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005098.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005099.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005100.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005103.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005105.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005106.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005109.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005112.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005113.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005117.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005120.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005123.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005125.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005126.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005127.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005132.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005133.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005137.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005139.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005140.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005141.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005142.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005147.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005148.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005149.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005151.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005152.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005154.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005155.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005158.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005162.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005163.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005164.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005165.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005166.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005170.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005172.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005174.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005178.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005180.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005182.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005184.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005187.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005188.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005192.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005193.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005194.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005196.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005197.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005200.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005201.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005204.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005205.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005207.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005211.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005213.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005216.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005218.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005221.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005225.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005226.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005228.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005232.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005233.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005234.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005235.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005237.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005238.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005240.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005241.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005243.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005247.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005249.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005250.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005251.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005252.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005255.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005256.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005261.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005265.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005266.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005271.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005272.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005275.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005276.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005277.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005279.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005280.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005282.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005284.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005286.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005287.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005289.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005291.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005294.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005295.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005296.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005299.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005300.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005301.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005302.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005308.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005309.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005316.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005321.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005322.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005323.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005324.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005329.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005330.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005332.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005333.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005334.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005335.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005339.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005341.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005342.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005347.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005354.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005356.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005359.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005361.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005362.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005364.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005372.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005375.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005376.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005377.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005381.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005382.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005386.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005390.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005392.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005394.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005400.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005401.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005403.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005409.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005411.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005415.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005422.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005425.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005427.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005432.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005435.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005437.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005442.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005443.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005444.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005446.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005449.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005452.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005456.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005458.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005459.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005460.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005462.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005463.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005464.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005466.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005468.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005472.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005474.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005476.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005477.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005479.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005480.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005482.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005484.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005488.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005490.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005491.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005492.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005493.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005494.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005495.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005498.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005500.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005501.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005502.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005503.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005504.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005506.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005512.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005513.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005516.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005520.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005523.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005525.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005528.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005529.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005532.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005533.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005534.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005537.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005540.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005543.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005545.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005546.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005548.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005551.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005553.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005555.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005556.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005557.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005558.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005562.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005564.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005565.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005567.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005570.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005571.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005572.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005575.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005578.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005580.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005581.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005587.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005589.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005594.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005595.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005596.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005597.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005598.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005604.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005607.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005610.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005612.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005617.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005619.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005621.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005622.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005623.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005626.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005627.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005628.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005632.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005633.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005634.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005635.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005638.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005642.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005643.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005649.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005650.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005651.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005656.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005661.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005663.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005666.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005667.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005670.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005671.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005673.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005675.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005677.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005678.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005681.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005683.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005684.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005688.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005689.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005690.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005691.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005692.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005694.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005698.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005703.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005706.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005707.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005708.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005709.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005711.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005712.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005717.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005720.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005721.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005722.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005724.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005725.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005726.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005727.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005733.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005734.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005737.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005739.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005745.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005746.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005748.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005750.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005751.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005753.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005754.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005758.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005759.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005763.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005766.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005767.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005770.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005771.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005772.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005774.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005775.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005776.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005777.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005778.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005785.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005792.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005793.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005795.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005797.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005798.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005800.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005801.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005807.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005808.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005809.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005810.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005816.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005820.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005822.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005823.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005827.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005832.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005833.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005834.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005835.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005837.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005842.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005844.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005847.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005848.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005849.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005855.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005857.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005858.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005862.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005865.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005866.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005869.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005870.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005871.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005872.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005876.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005880.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005882.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005883.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005886.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005887.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005890.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005891.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005892.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005896.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005898.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005900.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005902.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005904.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005907.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005913.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005915.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005916.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005921.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005924.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005926.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005929.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005931.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005932.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005933.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005934.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005935.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005936.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005937.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005939.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005941.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005942.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005943.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005944.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005945.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005946.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005949.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005950.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005953.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005955.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005958.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005959.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005962.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005965.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005966.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005967.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005969.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005972.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005973.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005974.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005976.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005977.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005978.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005982.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005986.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005987.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005993.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005994.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005997.jpg
-./VOC/test/VOCdevkit/VOC2007/images/005999.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006002.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006003.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006006.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006007.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006008.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006014.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006015.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006016.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006017.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006019.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006021.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006022.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006024.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006031.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006032.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006034.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006036.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006037.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006039.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006044.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006047.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006048.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006049.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006050.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006051.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006052.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006053.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006054.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006056.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006057.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006060.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006063.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006064.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006068.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006072.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006075.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006077.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006082.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006083.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006085.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006086.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006087.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006090.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006092.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006093.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006094.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006099.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006101.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006102.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006106.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006109.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006110.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006112.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006113.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006114.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006116.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006121.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006122.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006126.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006127.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006132.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006137.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006138.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006142.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006143.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006144.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006145.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006147.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006149.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006152.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006154.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006155.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006160.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006164.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006165.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006168.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006169.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006173.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006178.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006182.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006186.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006191.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006192.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006193.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006194.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006195.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006197.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006199.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006200.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006204.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006205.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006207.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006211.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006213.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006226.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006228.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006231.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006232.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006237.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006239.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006242.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006244.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006245.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006246.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006248.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006253.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006255.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006256.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006257.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006263.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006265.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006266.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006268.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006271.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006273.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006274.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006278.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006280.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006283.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006287.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006288.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006292.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006293.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006294.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006297.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006298.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006302.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006303.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006307.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006308.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006310.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006311.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006312.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006315.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006316.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006322.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006324.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006326.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006327.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006328.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006331.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006332.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006333.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006334.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006336.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006340.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006342.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006343.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006345.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006347.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006354.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006356.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006359.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006360.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006361.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006364.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006365.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006368.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006370.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006372.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006373.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006376.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006378.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006379.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006380.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006383.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006384.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006386.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006388.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006389.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006390.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006393.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006394.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006397.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006401.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006403.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006405.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006406.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006407.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006408.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006410.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006413.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006414.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006415.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006416.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006420.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006422.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006423.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006431.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006432.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006435.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006439.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006441.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006446.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006451.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006452.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006453.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006454.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006457.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006460.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006461.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006464.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006467.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006469.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006471.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006477.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006478.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006479.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006481.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006485.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006487.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006489.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006490.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006491.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006493.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006494.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006496.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006498.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006500.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006502.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006504.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006508.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006510.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006511.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006513.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006514.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006516.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006517.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006518.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006521.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006522.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006525.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006526.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006527.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006528.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006531.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006533.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006535.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006537.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006539.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006540.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006541.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006544.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006545.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006546.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006552.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006554.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006555.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006557.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006558.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006559.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006563.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006566.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006567.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006568.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006571.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006573.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006577.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006579.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006580.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006581.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006582.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006586.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006589.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006590.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006591.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006592.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006594.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006596.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006598.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006600.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006601.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006604.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006607.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006608.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006613.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006614.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006615.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006620.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006623.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006624.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006629.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006630.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006633.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006634.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006639.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006640.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006641.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006642.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006644.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006649.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006650.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006651.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006653.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006655.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006656.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006662.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006663.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006669.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006672.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006675.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006676.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006680.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006683.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006685.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006686.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006688.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006691.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006692.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006693.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006700.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006701.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006705.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006710.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006711.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006712.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006713.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006715.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006717.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006720.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006721.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006723.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006724.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006728.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006729.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006732.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006733.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006737.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006741.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006742.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006743.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006745.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006746.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006749.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006750.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006752.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006754.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006756.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006757.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006758.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006763.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006764.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006767.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006770.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006771.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006774.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006775.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006776.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006778.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006779.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006780.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006785.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006788.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006790.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006791.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006792.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006793.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006795.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006796.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006798.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006801.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006807.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006809.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006811.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006812.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006815.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006816.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006817.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006818.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006820.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006823.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006826.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006830.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006831.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006832.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006834.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006837.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006843.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006851.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006853.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006854.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006856.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006857.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006861.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006863.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006870.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006871.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006872.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006873.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006875.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006877.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006879.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006881.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006882.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006885.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006888.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006889.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006890.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006891.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006894.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006895.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006897.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006898.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006901.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006902.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006904.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006905.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006906.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006907.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006913.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006915.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006920.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006921.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006923.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006926.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006928.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006929.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006936.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006937.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006938.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006941.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006942.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006946.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006951.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006954.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006955.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006960.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006961.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006964.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006967.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006969.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006970.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006973.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006974.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006975.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006977.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006978.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006979.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006980.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006982.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006984.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006985.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006986.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006991.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006992.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006993.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006996.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006997.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006998.jpg
-./VOC/test/VOCdevkit/VOC2007/images/006999.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007000.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007001.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007005.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007012.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007014.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007015.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007017.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007019.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007024.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007026.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007027.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007028.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007030.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007032.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007034.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007037.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007041.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007043.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007044.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007047.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007051.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007053.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007055.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007057.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007060.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007061.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007063.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007066.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007067.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007069.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007082.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007083.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007085.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007087.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007091.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007094.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007096.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007098.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007099.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007102.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007103.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007106.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007107.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007110.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007111.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007112.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007116.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007120.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007124.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007126.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007127.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007131.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007134.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007135.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007136.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007137.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007142.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007143.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007145.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007151.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007155.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007156.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007158.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007160.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007161.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007164.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007169.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007170.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007171.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007173.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007175.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007176.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007178.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007179.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007181.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007183.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007186.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007188.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007190.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007192.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007195.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007196.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007199.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007201.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007202.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007203.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007207.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007209.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007218.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007220.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007221.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007225.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007226.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007228.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007229.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007231.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007232.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007233.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007235.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007237.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007238.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007239.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007240.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007242.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007246.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007248.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007251.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007252.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007253.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007254.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007255.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007257.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007262.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007264.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007265.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007267.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007268.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007269.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007272.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007273.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007277.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007278.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007281.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007282.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007286.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007287.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007288.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007290.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007291.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007293.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007301.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007303.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007304.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007306.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007307.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007309.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007310.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007312.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007315.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007316.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007319.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007320.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007321.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007324.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007326.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007328.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007331.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007332.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007333.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007335.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007337.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007338.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007339.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007340.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007341.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007342.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007345.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007347.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007348.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007349.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007352.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007354.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007355.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007360.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007362.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007364.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007367.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007368.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007371.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007377.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007378.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007379.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007380.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007382.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007384.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007386.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007387.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007391.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007392.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007393.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007395.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007397.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007400.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007401.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007403.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007404.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007405.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007406.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007407.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007409.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007415.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007418.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007420.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007423.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007429.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007430.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007434.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007440.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007441.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007442.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007444.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007450.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007452.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007453.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007455.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007456.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007459.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007462.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007463.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007464.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007469.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007471.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007472.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007476.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007478.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007485.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007487.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007488.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007492.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007494.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007495.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007496.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007499.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007500.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007501.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007502.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007504.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007507.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007508.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007509.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007510.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007512.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007514.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007515.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007516.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007518.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007520.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007522.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007529.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007531.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007532.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007534.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007539.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007541.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007542.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007545.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007548.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007549.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007550.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007552.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007553.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007554.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007556.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007557.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007562.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007564.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007567.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007573.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007577.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007580.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007581.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007582.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007583.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007584.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007587.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007588.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007589.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007591.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007593.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007595.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007596.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007597.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007598.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007599.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007604.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007607.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007608.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007609.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007610.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007613.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007617.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007620.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007623.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007625.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007627.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007628.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007630.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007632.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007634.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007635.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007636.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007638.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007641.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007643.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007644.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007645.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007648.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007651.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007652.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007658.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007660.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007661.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007669.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007674.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007676.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007681.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007684.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007686.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007689.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007690.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007693.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007695.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007698.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007700.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007701.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007703.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007706.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007707.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007708.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007710.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007711.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007714.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007717.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007719.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007722.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007725.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007726.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007728.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007730.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007733.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007734.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007737.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007738.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007739.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007741.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007747.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007750.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007752.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007755.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007756.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007757.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007759.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007761.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007764.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007766.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007769.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007770.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007771.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007774.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007778.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007780.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007782.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007783.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007784.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007785.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007788.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007789.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007792.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007794.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007796.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007797.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007800.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007801.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007805.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007806.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007807.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007808.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007811.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007816.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007817.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007818.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007822.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007823.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007825.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007827.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007828.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007829.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007830.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007832.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007835.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007837.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007839.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007842.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007844.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007848.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007849.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007850.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007851.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007852.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007858.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007860.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007861.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007862.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007866.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007867.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007870.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007871.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007874.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007875.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007879.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007880.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007881.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007882.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007887.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007888.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007891.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007892.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007893.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007894.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007895.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007896.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007903.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007904.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007906.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007907.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007912.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007913.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007917.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007918.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007929.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007930.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007934.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007936.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007937.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007938.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007941.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007942.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007944.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007945.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007948.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007949.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007951.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007952.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007955.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007960.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007961.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007962.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007965.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007966.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007967.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007969.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007972.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007973.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007975.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007977.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007978.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007981.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007982.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007983.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007985.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007986.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007988.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007989.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007990.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007992.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007993.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007994.jpg
-./VOC/test/VOCdevkit/VOC2007/images/007995.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008000.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008003.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008006.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008007.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008011.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008014.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008015.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008016.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008018.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008020.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008021.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008022.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008025.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008027.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008028.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008030.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008034.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008035.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008038.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008039.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008041.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008045.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008046.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008047.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008050.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008052.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008054.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008055.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008056.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008058.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008059.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008065.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008066.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008070.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008071.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008073.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008074.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008077.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008078.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008080.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008088.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008089.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008090.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008092.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008094.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008097.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008099.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008102.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008104.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008109.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008110.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008111.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008113.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008114.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008120.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008123.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008124.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008126.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008128.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008129.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008131.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008133.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008134.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008135.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008136.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008143.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008145.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008146.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008147.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008148.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008149.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008152.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008153.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008154.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008155.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008156.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008157.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008158.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008161.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008162.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008165.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008170.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008172.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008176.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008178.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008179.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008181.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008182.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008183.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008184.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008185.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008187.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008192.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008193.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008194.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008195.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008196.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008201.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008205.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008207.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008210.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008212.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008214.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008215.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008219.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008221.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008227.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008228.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008230.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008231.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008233.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008234.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008237.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008238.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008239.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008240.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008242.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008243.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008245.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008246.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008247.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008249.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008255.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008256.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008257.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008259.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008264.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008265.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008266.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008267.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008270.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008271.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008273.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008274.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008276.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008277.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008278.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008283.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008286.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008287.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008288.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008289.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008290.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008291.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008298.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008303.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008304.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008305.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008308.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008309.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008314.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008321.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008324.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008325.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008328.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008330.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008331.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008333.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008334.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008337.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008339.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008340.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008343.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008344.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008347.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008348.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008350.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008352.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008354.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008356.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008358.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008361.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008362.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008363.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008367.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008369.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008371.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008373.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008375.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008377.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008378.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008379.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008380.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008382.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008383.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008389.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008392.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008393.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008394.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008395.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008396.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008400.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008401.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008404.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008405.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008406.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008407.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008408.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008411.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008412.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008414.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008417.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008418.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008419.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008420.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008421.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008431.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008432.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008435.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008436.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008439.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008440.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008441.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008446.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008448.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008451.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008455.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008457.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008458.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008459.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008460.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008463.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008464.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008469.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008471.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008474.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008476.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008479.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008480.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008481.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008486.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008487.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008488.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008489.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008490.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008491.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008493.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008496.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008497.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008500.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008501.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008504.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008507.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008508.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008510.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008511.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008515.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008516.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008520.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008525.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008527.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008528.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008531.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008532.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008537.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008539.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008540.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008543.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008544.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008545.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008546.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008547.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008548.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008551.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008552.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008554.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008555.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008560.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008563.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008565.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008566.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008567.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008570.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008571.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008575.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008577.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008578.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008579.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008580.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008583.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008589.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008590.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008591.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008593.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008594.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008597.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008598.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008599.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008600.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008603.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008605.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008609.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008611.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008613.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008614.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008619.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008622.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008623.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008625.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008626.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008627.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008629.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008630.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008631.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008632.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008634.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008637.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008640.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008641.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008642.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008643.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008648.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008649.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008650.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008651.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008652.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008656.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008657.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008658.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008659.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008660.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008661.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008662.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008664.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008666.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008668.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008669.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008671.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008672.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008673.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008674.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008675.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008677.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008678.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008679.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008681.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008682.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008684.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008685.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008686.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008689.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008693.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008694.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008696.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008697.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008700.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008703.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008704.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008705.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008707.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008708.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008711.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008712.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008714.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008715.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008719.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008721.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008724.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008726.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008729.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008734.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008735.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008736.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008737.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008740.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008743.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008745.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008746.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008751.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008754.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008758.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008761.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008762.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008763.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008765.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008767.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008774.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008777.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008778.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008779.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008780.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008781.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008782.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008785.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008786.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008788.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008789.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008791.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008792.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008795.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008797.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008798.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008800.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008803.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008807.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008808.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008812.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008816.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008818.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008820.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008821.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008824.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008825.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008827.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008828.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008829.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008830.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008832.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008834.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008839.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008842.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008844.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008845.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008850.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008851.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008852.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008853.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008855.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008857.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008860.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008861.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008863.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008864.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008866.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008868.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008869.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008870.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008875.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008877.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008881.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008882.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008887.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008889.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008893.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008894.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008895.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008896.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008897.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008898.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008899.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008901.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008902.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008903.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008904.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008906.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008907.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008908.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008910.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008912.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008915.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008916.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008918.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008924.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008928.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008934.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008935.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008937.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008938.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008941.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008945.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008946.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008947.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008949.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008950.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008952.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008954.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008956.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008959.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008963.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008964.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008972.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008974.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008977.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008981.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008984.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008986.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008990.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008991.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008992.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008993.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008994.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008996.jpg
-./VOC/test/VOCdevkit/VOC2007/images/008998.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009001.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009003.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009008.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009009.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009010.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009011.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009012.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009013.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009014.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009017.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009021.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009023.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009025.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009026.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009028.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009030.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009031.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009033.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009038.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009040.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009041.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009043.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009044.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009046.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009047.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009050.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009052.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009054.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009055.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009056.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009057.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009061.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009062.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009065.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009067.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009069.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009070.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009071.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009074.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009075.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009076.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009077.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009081.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009082.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009083.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009084.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009088.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009090.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009092.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009093.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009095.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009096.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009097.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009101.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009102.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009103.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009104.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009107.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009109.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009110.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009111.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009115.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009118.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009119.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009120.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009122.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009124.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009125.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009127.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009130.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009132.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009134.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009135.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009137.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009139.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009140.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009142.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009143.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009145.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009146.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009149.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009152.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009154.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009156.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009158.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009164.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009165.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009167.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009169.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009170.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009171.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009172.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009176.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009182.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009183.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009188.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009190.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009198.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009199.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009201.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009203.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009204.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009206.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009207.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009210.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009211.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009216.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009217.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009219.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009220.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009222.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009223.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009225.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009226.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009228.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009229.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009231.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009232.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009233.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009234.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009235.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009237.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009240.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009241.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009243.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009248.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009253.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009256.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009257.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009258.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009260.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009261.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009262.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009263.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009264.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009265.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009266.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009267.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009274.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009275.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009276.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009277.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009280.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009284.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009292.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009293.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009294.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009297.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009298.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009300.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009301.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009302.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009304.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009305.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009310.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009311.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009313.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009314.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009317.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009319.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009320.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009321.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009322.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009328.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009329.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009332.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009335.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009338.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009340.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009341.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009344.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009345.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009346.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009352.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009353.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009355.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009356.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009357.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009360.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009361.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009363.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009364.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009366.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009367.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009369.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009370.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009372.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009376.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009379.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009380.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009381.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009383.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009384.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009385.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009387.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009390.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009391.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009395.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009396.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009397.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009399.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009400.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009402.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009403.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009404.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009415.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009416.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009423.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009425.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009426.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009427.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009428.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009430.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009431.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009435.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009436.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009441.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009442.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009444.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009447.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009449.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009450.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009451.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009452.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009453.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009462.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009467.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009471.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009473.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009474.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009475.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009478.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009482.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009483.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009485.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009486.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009487.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009489.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009492.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009493.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009495.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009498.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009501.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009503.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009505.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009506.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009509.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009510.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009511.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009513.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009514.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009521.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009522.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009525.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009529.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009530.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009534.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009535.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009536.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009538.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009539.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009544.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009547.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009548.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009552.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009553.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009554.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009555.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009556.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009559.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009561.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009563.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009564.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009569.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009570.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009572.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009574.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009575.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009578.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009581.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009582.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009583.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009589.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009590.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009592.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009593.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009594.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009595.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009599.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009601.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009602.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009604.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009606.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009607.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009608.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009610.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009612.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009616.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009622.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009624.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009625.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009626.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009628.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009630.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009631.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009632.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009633.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009635.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009639.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009640.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009642.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009643.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009645.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009646.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009648.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009651.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009652.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009653.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009657.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009660.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009661.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009662.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009663.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009665.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009669.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009672.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009673.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009674.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009675.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009677.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009680.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009682.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009683.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009688.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009689.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009690.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009694.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009696.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009697.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009701.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009704.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009705.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009708.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009714.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009715.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009716.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009720.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009722.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009723.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009725.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009727.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009728.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009730.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009731.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009736.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009739.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009740.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009741.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009742.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009744.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009750.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009751.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009752.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009753.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009757.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009759.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009760.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009765.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009766.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009768.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009769.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009770.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009771.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009775.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009777.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009779.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009782.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009783.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009784.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009786.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009787.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009788.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009791.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009793.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009795.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009798.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009799.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009802.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009803.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009804.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009806.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009811.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009812.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009814.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009815.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009817.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009818.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009820.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009821.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009824.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009826.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009827.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009829.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009835.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009837.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009838.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009840.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009843.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009844.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009846.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009847.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009849.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009850.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009853.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009854.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009856.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009857.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009861.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009864.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009866.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009871.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009873.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009875.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009876.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009883.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009885.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009888.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009889.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009890.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009891.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009892.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009893.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009895.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009899.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009901.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009903.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009906.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009907.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009909.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009910.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009912.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009914.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009915.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009916.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009919.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009921.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009922.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009924.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009925.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009927.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009928.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009929.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009930.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009931.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009933.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009934.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009936.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009937.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009941.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009943.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009948.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009951.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009952.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009953.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009956.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009957.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009960.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009962.jpg
-./VOC/test/VOCdevkit/VOC2007/images/009963.jpg
diff --git a/cv/detection/yolov3/pytorch/get_num_devices.sh b/cv/detection/yolov3/pytorch/get_num_devices.sh
deleted file mode 100644
index e28edae741e3014606c4c0eef2b78a22223b2418..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/get_num_devices.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-
-devices=$CUDA_VISIBLE_DEVICES
-if [ -n "$devices" ]; then
- _devices=(${devices//,/ })
- num_devices=${#_devices[@]}
-else
- num_devices=2
- export CUDA_VISIBLE_DEVICES=0,1
- echo "Not found CUDA_VISIBLE_DEVICES, set nproc_per_node = ${num_devices}"
-fi
-export IX_NUM_CUDA_VISIBLE_DEVICES=${num_devices}
\ No newline at end of file
diff --git a/cv/detection/yolov3/pytorch/poetry.lock b/cv/detection/yolov3/pytorch/poetry.lock
deleted file mode 100644
index 18bfa8073c9e39c5af074e2bc4ec9f9f6871f2de..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/poetry.lock
+++ /dev/null
@@ -1,1137 +0,0 @@
-[[package]]
-name = "absl-py"
-version = "0.12.0"
-description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-six = "*"
-
-[[package]]
-name = "cachetools"
-version = "3.1.1"
-description = "Extensible memoizing collections and decorators"
-category = "main"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "certifi"
-version = "2020.12.5"
-description = "Python package for providing Mozilla's CA Bundle."
-category = "main"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "chardet"
-version = "4.0.0"
-description = "Universal encoding detector for Python 2 and 3"
-category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
-
-[[package]]
-name = "cycler"
-version = "0.10.0"
-description = "Composable style cycles"
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-six = "*"
-
-[[package]]
-name = "dataclasses"
-version = "0.8"
-description = "A backport of the dataclasses module for Python 3.6"
-category = "main"
-optional = false
-python-versions = ">=3.6, <3.7"
-
-[[package]]
-name = "decorator"
-version = "4.4.2"
-description = "Decorators for Humans"
-category = "main"
-optional = false
-python-versions = ">=2.6, !=3.0.*, !=3.1.*"
-
-[[package]]
-name = "google-auth"
-version = "1.29.0"
-description = "Google Authentication Library"
-category = "main"
-optional = false
-python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
-
-[package.dependencies]
-cachetools = ">=2.0.0,<5.0"
-pyasn1-modules = ">=0.2.1"
-rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
-six = ">=1.9.0"
-
-[package.extras]
-aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)"]
-pyopenssl = ["pyopenssl (>=20.0.0)"]
-reauth = ["pyu2f (>=0.1.5)"]
-
-[[package]]
-name = "google-auth-oauthlib"
-version = "0.4.4"
-description = "Google Authentication Library"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-google-auth = ">=1.0.0"
-requests-oauthlib = ">=0.7.0"
-
-[package.extras]
-tool = ["click (>=6.0.0)"]
-
-[[package]]
-name = "grpcio"
-version = "1.37.0"
-description = "HTTP/2-based RPC framework"
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-six = ">=1.5.2"
-
-[package.extras]
-protobuf = ["grpcio-tools (>=1.37.0)"]
-
-[[package]]
-name = "idna"
-version = "2.10"
-description = "Internationalized Domain Names in Applications (IDNA)"
-category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
-
-[[package]]
-name = "imageio"
-version = "2.9.0"
-description = "Library for reading and writing a wide range of image, video, scientific, and volumetric data formats."
-category = "main"
-optional = false
-python-versions = ">=3.5"
-
-[package.dependencies]
-numpy = "*"
-pillow = "*"
-
-[package.extras]
-ffmpeg = ["imageio-ffmpeg"]
-fits = ["astropy"]
-full = ["astropy", "gdal", "imageio-ffmpeg", "itk"]
-gdal = ["gdal"]
-itk = ["itk"]
-
-[[package]]
-name = "imgaug"
-version = "0.4.0"
-description = "Image augmentation library for deep neural networks"
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-imageio = "*"
-matplotlib = "*"
-numpy = ">=1.15"
-opencv-python = "*"
-Pillow = "*"
-scikit-image = ">=0.14.2"
-scipy = "*"
-Shapely = "*"
-six = "*"
-
-[[package]]
-name = "importlib-metadata"
-version = "4.0.1"
-description = "Read metadata from Python packages"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""}
-zipp = ">=0.5"
-
-[package.extras]
-docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"]
-testing = ["pytest (>=4.6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"]
-
-[[package]]
-name = "kiwisolver"
-version = "1.3.1"
-description = "A fast implementation of the Cassowary constraint solver"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[[package]]
-name = "markdown"
-version = "3.3.4"
-description = "Python implementation of Markdown."
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
-
-[package.extras]
-testing = ["coverage", "pyyaml"]
-
-[[package]]
-name = "matplotlib"
-version = "3.3.4"
-description = "Python plotting package"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-cycler = ">=0.10"
-kiwisolver = ">=1.0.1"
-numpy = ">=1.15"
-pillow = ">=6.2.0"
-pyparsing = ">=2.0.3,<2.0.4 || >2.0.4,<2.1.2 || >2.1.2,<2.1.6 || >2.1.6"
-python-dateutil = ">=2.1"
-
-[[package]]
-name = "networkx"
-version = "2.5.1"
-description = "Python package for creating and manipulating graphs and networks"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-decorator = ">=4.3,<5"
-
-[package.extras]
-all = ["numpy", "scipy", "pandas", "matplotlib", "pygraphviz", "pydot", "pyyaml", "lxml", "pytest"]
-gdal = ["gdal"]
-lxml = ["lxml"]
-matplotlib = ["matplotlib"]
-numpy = ["numpy"]
-pandas = ["pandas"]
-pydot = ["pydot"]
-pygraphviz = ["pygraphviz"]
-pytest = ["pytest"]
-pyyaml = ["pyyaml"]
-scipy = ["scipy"]
-
-[[package]]
-name = "numpy"
-version = "1.19.5"
-description = "NumPy is the fundamental package for array computing with Python."
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[[package]]
-name = "oauthlib"
-version = "3.1.0"
-description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
-category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
-
-[package.extras]
-rsa = ["cryptography"]
-signals = ["blinker"]
-signedtoken = ["cryptography", "pyjwt (>=1.0.0)"]
-
-[[package]]
-name = "opencv-python"
-version = "4.5.1.48"
-description = "Wrapper package for OpenCV python bindings."
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-numpy = ">=1.19.3"
-
-[[package]]
-name = "pillow"
-version = "8.2.0"
-description = "Python Imaging Library (Fork)"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[[package]]
-name = "profilehooks"
-version = "1.12.0"
-description = "Decorators for profiling/timing/tracing individual functions"
-category = "dev"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "protobuf"
-version = "3.15.8"
-description = "Protocol Buffers"
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-six = ">=1.9"
-
-[[package]]
-name = "pyasn1"
-version = "0.4.8"
-description = "ASN.1 types and codecs"
-category = "main"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "pyasn1-modules"
-version = "0.2.8"
-description = "A collection of ASN.1-based protocols modules."
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-pyasn1 = ">=0.4.6,<0.5.0"
-
-[[package]]
-name = "pyparsing"
-version = "2.4.7"
-description = "Python parsing module"
-category = "main"
-optional = false
-python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
-
-[[package]]
-name = "python-dateutil"
-version = "2.8.1"
-description = "Extensions to the standard Python datetime module"
-category = "main"
-optional = false
-python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
-
-[package.dependencies]
-six = ">=1.5"
-
-[[package]]
-name = "pywavelets"
-version = "1.1.1"
-description = "PyWavelets, wavelet transform module"
-category = "main"
-optional = false
-python-versions = ">=3.5"
-
-[package.dependencies]
-numpy = ">=1.13.3"
-
-[[package]]
-name = "requests"
-version = "2.25.1"
-description = "Python HTTP for Humans."
-category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
-
-[package.dependencies]
-certifi = ">=2017.4.17"
-chardet = ">=3.0.2,<5"
-idna = ">=2.5,<3"
-urllib3 = ">=1.21.1,<1.27"
-
-[package.extras]
-security = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)"]
-socks = ["PySocks (>=1.5.6,!=1.5.7)", "win-inet-pton"]
-
-[[package]]
-name = "requests-oauthlib"
-version = "1.3.0"
-description = "OAuthlib authentication support for Requests."
-category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
-
-[package.dependencies]
-oauthlib = ">=3.0.0"
-requests = ">=2.0.0"
-
-[package.extras]
-rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
-
-[[package]]
-name = "rope"
-version = "0.19.0"
-description = "a python refactoring library..."
-category = "dev"
-optional = false
-python-versions = "*"
-
-[package.extras]
-dev = ["pytest"]
-
-[[package]]
-name = "rsa"
-version = "4.4"
-description = "Pure-Python RSA implementation"
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-pyasn1 = ">=0.1.3"
-
-[[package]]
-name = "scikit-image"
-version = "0.17.2"
-description = "Image processing in Python"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-imageio = ">=2.3.0"
-matplotlib = ">=2.0.0,<3.0.0 || >3.0.0"
-networkx = ">=2.0"
-numpy = ">=1.15.1"
-pillow = ">=4.3.0,<7.1.0 || >7.1.0,<7.1.1 || >7.1.1"
-PyWavelets = ">=1.1.1"
-scipy = ">=1.0.1"
-tifffile = ">=2019.7.26"
-
-[package.extras]
-docs = ["sphinx (>=1.8,<=2.4.4)", "numpydoc (>=0.9)", "sphinx-gallery (>=0.3.1)", "sphinx-copybutton", "pytest-runner", "scikit-learn", "matplotlib (>=3.0.1)", "dask[array] (>=0.15.0)", "cloudpickle (>=0.2.1)", "pandas (>=0.23.0)", "seaborn (>=0.7.1)", "pooch (>=0.5.2)"]
-optional = ["simpleitk", "astropy (>=1.2.0)", "qtpy", "pyamg", "dask[array] (>=0.15.0)", "cloudpickle (>=0.2.1)", "pooch (>=0.5.2)"]
-test = ["pytest (!=3.7.3)", "pytest-cov", "pytest-localserver", "flake8", "codecov"]
-
-[[package]]
-name = "scipy"
-version = "1.5.4"
-description = "SciPy: Scientific Library for Python"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-numpy = ">=1.14.5"
-
-[[package]]
-name = "shapely"
-version = "1.7.1"
-description = "Geometric objects, predicates, and operations"
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.extras]
-all = ["numpy", "pytest", "pytest-cov"]
-test = ["pytest", "pytest-cov"]
-vectorized = ["numpy"]
-
-[[package]]
-name = "six"
-version = "1.15.0"
-description = "Python 2 and 3 compatibility utilities"
-category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
-
-[[package]]
-name = "tensorboard"
-version = "2.5.0"
-description = "TensorBoard lets you watch Tensors Flow"
-category = "main"
-optional = false
-python-versions = ">= 2.7, != 3.0.*, != 3.1.*"
-
-[package.dependencies]
-absl-py = ">=0.4"
-google-auth = ">=1.6.3,<2"
-google-auth-oauthlib = ">=0.4.1,<0.5"
-grpcio = ">=1.24.3"
-markdown = ">=2.6.8"
-numpy = ">=1.12.0"
-protobuf = ">=3.6.0"
-requests = ">=2.21.0,<3"
-tensorboard-data-server = ">=0.6.0,<0.7.0"
-tensorboard-plugin-wit = ">=1.6.0"
-werkzeug = ">=0.11.15"
-
-[[package]]
-name = "tensorboard-data-server"
-version = "0.6.0"
-description = "Fast data loading for TensorBoard"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[[package]]
-name = "tensorboard-plugin-wit"
-version = "1.8.0"
-description = "What-If Tool TensorBoard plugin."
-category = "main"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "terminaltables"
-version = "3.1.0"
-description = "Generate simple tables in terminals from a nested list of strings."
-category = "main"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "tifffile"
-version = "2020.9.3"
-description = "Read and write TIFF(r) files"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.dependencies]
-numpy = ">=1.15.1"
-
-[package.extras]
-all = ["imagecodecs (>=2020.2.18)", "matplotlib (>=3.1)", "lxml"]
-
-[[package]]
-name = "torch"
-version = "1.7.1"
-description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
-category = "main"
-optional = false
-python-versions = ">=3.6.2"
-
-[package.dependencies]
-dataclasses = {version = "*", markers = "python_version < \"3.7\""}
-numpy = "*"
-typing-extensions = "*"
-
-[[package]]
-name = "torchsummary"
-version = "1.5.1"
-description = "Model summary in PyTorch similar to `model.summary()` in Keras"
-category = "main"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "torchvision"
-version = "0.8.2"
-description = "image and video datasets and models for torch deep learning"
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.dependencies]
-numpy = "*"
-pillow = ">=4.1.1"
-torch = "1.7.1"
-
-[package.extras]
-scipy = ["scipy"]
-
-[[package]]
-name = "tqdm"
-version = "4.61.1"
-description = "Fast, Extensible Progress Meter"
-category = "main"
-optional = false
-python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
-
-[package.extras]
-dev = ["py-make (>=0.1.0)", "twine", "wheel"]
-notebook = ["ipywidgets (>=6)"]
-telegram = ["requests"]
-
-[[package]]
-name = "typing-extensions"
-version = "3.7.4.3"
-description = "Backported and Experimental Type Hints for Python 3.5+"
-category = "main"
-optional = false
-python-versions = "*"
-
-[[package]]
-name = "urllib3"
-version = "1.22"
-description = "HTTP library with thread-safe connection pooling, file post, and more."
-category = "main"
-optional = false
-python-versions = "*"
-
-[package.extras]
-secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
-socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
-
-[[package]]
-name = "werkzeug"
-version = "1.0.1"
-description = "The comprehensive WSGI web application library."
-category = "main"
-optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
-
-[package.extras]
-dev = ["pytest", "pytest-timeout", "coverage", "tox", "sphinx", "pallets-sphinx-themes", "sphinx-issues"]
-watchdog = ["watchdog"]
-
-[[package]]
-name = "zipp"
-version = "3.4.1"
-description = "Backport of pathlib-compatible object wrapper for zip files"
-category = "main"
-optional = false
-python-versions = ">=3.6"
-
-[package.extras]
-docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"]
-testing = ["pytest (>=4.6)", "pytest-checkdocs (>=1.2.3)", "pytest-flake8", "pytest-cov", "pytest-enabler", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy"]
-
-[metadata]
-lock-version = "1.1"
-python-versions = ">=3.6.2"
-content-hash = "0ae76b71dc01d634182477b444416ade23527cb1837dcd5a54061e944af04a24"
-
-[metadata.files]
-absl-py = [
- {file = "absl-py-0.12.0.tar.gz", hash = "sha256:b44f68984a5ceb2607d135a615999b93924c771238a63920d17d3387b0d229d5"},
- {file = "absl_py-0.12.0-py3-none-any.whl", hash = "sha256:afe94e3c751ff81aad55d33ab6e630390da32780110b5af72ae81ecff8418d9e"},
-]
-cachetools = [
- {file = "cachetools-3.1.1-py2.py3-none-any.whl", hash = "sha256:428266a1c0d36dc5aca63a2d7c5942e88c2c898d72139fca0e97fdd2380517ae"},
- {file = "cachetools-3.1.1.tar.gz", hash = "sha256:8ea2d3ce97850f31e4a08b0e2b5e6c34997d7216a9d2c98e0f3978630d4da69a"},
-]
-certifi = [
- {file = "certifi-2020.12.5-py2.py3-none-any.whl", hash = "sha256:719a74fb9e33b9bd44cc7f3a8d94bc35e4049deebe19ba7d8e108280cfd59830"},
- {file = "certifi-2020.12.5.tar.gz", hash = "sha256:1a4995114262bffbc2413b159f2a1a480c969de6e6eb13ee966d470af86af59c"},
-]
-chardet = [
- {file = "chardet-4.0.0-py2.py3-none-any.whl", hash = "sha256:f864054d66fd9118f2e67044ac8981a54775ec5b67aed0441892edb553d21da5"},
- {file = "chardet-4.0.0.tar.gz", hash = "sha256:0d6f53a15db4120f2b08c94f11e7d93d2c911ee118b6b30a04ec3ee8310179fa"},
-]
-cycler = [
- {file = "cycler-0.10.0-py2.py3-none-any.whl", hash = "sha256:1d8a5ae1ff6c5cf9b93e8811e581232ad8920aeec647c37316ceac982b08cb2d"},
- {file = "cycler-0.10.0.tar.gz", hash = "sha256:cd7b2d1018258d7247a71425e9f26463dfb444d411c39569972f4ce586b0c9d8"},
-]
-dataclasses = [
- {file = "dataclasses-0.8-py3-none-any.whl", hash = "sha256:0201d89fa866f68c8ebd9d08ee6ff50c0b255f8ec63a71c16fda7af82bb887bf"},
- {file = "dataclasses-0.8.tar.gz", hash = "sha256:8479067f342acf957dc82ec415d355ab5edb7e7646b90dc6e2fd1d96ad084c97"},
-]
-decorator = [
- {file = "decorator-4.4.2-py2.py3-none-any.whl", hash = "sha256:41fa54c2a0cc4ba648be4fd43cff00aedf5b9465c9bf18d64325bc225f08f760"},
- {file = "decorator-4.4.2.tar.gz", hash = "sha256:e3a62f0520172440ca0dcc823749319382e377f37f140a0b99ef45fecb84bfe7"},
-]
-google-auth = [
- {file = "google-auth-1.29.0.tar.gz", hash = "sha256:010f011c4e27d3d5eb01106fba6aac39d164842dfcd8709955c4638f5b11ccf8"},
- {file = "google_auth-1.29.0-py2.py3-none-any.whl", hash = "sha256:f30a672a64d91cc2e3137765d088c5deec26416246f7a9e956eaf69a8d7ed49c"},
-]
-google-auth-oauthlib = [
- {file = "google-auth-oauthlib-0.4.4.tar.gz", hash = "sha256:09832c6e75032f93818edf1affe4746121d640c625a5bef9b5c96af676e98eee"},
- {file = "google_auth_oauthlib-0.4.4-py2.py3-none-any.whl", hash = "sha256:0e92aacacfb94978de3b7972cf4b0f204c3cd206f74ddd0dc0b31e91164e6317"},
-]
-grpcio = [
- {file = "grpcio-1.37.0-cp27-cp27m-macosx_10_10_x86_64.whl", hash = "sha256:8a0517e7a6784439a3730e50597bd64debf776692adea3c18f869a36454952e1"},
- {file = "grpcio-1.37.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:96ca74522bcd979856d359fcca3128f760c69885d264dc22044fd1a468e0eb68"},
- {file = "grpcio-1.37.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:3da2b0b8afe3ef34c9e2f90329b1f170fc50db5c4d0bbe986946caa659e5ed17"},
- {file = "grpcio-1.37.0-cp27-cp27m-win32.whl", hash = "sha256:0634cd805c6725ab71bebaf3370da0e5d32339c26eb1b6ad0f73d64224e19ddf"},
- {file = "grpcio-1.37.0-cp27-cp27m-win_amd64.whl", hash = "sha256:fe14c86c58190463f6e714637bba366874ca1e518ff1f82723d90765e6e39288"},
- {file = "grpcio-1.37.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:14d7a15030a3f72cfd16dde8018d9f0e29e3f52cb566506dc942220b69b65de8"},
- {file = "grpcio-1.37.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:9d389f4e008edbd91082baff37507bbf4b25afd6c239c8070071f8936466a374"},
- {file = "grpcio-1.37.0-cp35-cp35m-macosx_10_10_intel.whl", hash = "sha256:a8b0914e6ac8987b8f59fcfb79519c5ce8df279b19d1c88bda2fc6e147821217"},
- {file = "grpcio-1.37.0-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:aaf44d496fe53ca1414677cab73b7935d01006f0b8ab4a32ab18704643a80ab5"},
- {file = "grpcio-1.37.0-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:fb6588a47d096cdaa0815d108b714d3e273361bfe03bc47725ddb1fdeaa56061"},
- {file = "grpcio-1.37.0-cp35-cp35m-manylinux2014_i686.whl", hash = "sha256:9b872b6c8ab618caa9bdee871c51021c7cc4890c141e7ee7bb6b923174bb299a"},
- {file = "grpcio-1.37.0-cp35-cp35m-manylinux2014_x86_64.whl", hash = "sha256:810d488804291f22cb696692cfddf75b12bbc9d34beca0159d99103286ac0091"},
- {file = "grpcio-1.37.0-cp35-cp35m-win32.whl", hash = "sha256:55fbdb9a2f81b28bd15af5c6e6669a2c8bb0bdb2add74c8818f9593a7428a164"},
- {file = "grpcio-1.37.0-cp35-cp35m-win_amd64.whl", hash = "sha256:fa6cfecbafbab8c4a229c42787b02cf58d0f128ad43c27b89c4df603b66d7f3c"},
- {file = "grpcio-1.37.0-cp36-cp36m-linux_armv7l.whl", hash = "sha256:b36eeb8a29f214f876ddda563990267a8b35d0a6da587edfa97effa4cdf6e5bd"},
- {file = "grpcio-1.37.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:a89b5d2f64d588b46a8b77c04ada4c68ee1cfd0b7a148ff9108d72eefdc9b363"},
- {file = "grpcio-1.37.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:e0169f550dc9ba88da0bb60b8198437d9bd0e8600d600e3569cd3ba7d2ce0bc7"},
- {file = "grpcio-1.37.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:4408b2732fdf93f735ecb059193219528981d27483feaa822970226d5c66c143"},
- {file = "grpcio-1.37.0-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:5784d1e4877345efb6655f6851809441478769558565d8291a54e1bd3f19548b"},
- {file = "grpcio-1.37.0-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:96e3d85eb63d144656611eef4683f5b4003e1deec93bc2d6cbc5cf330f275a7e"},
- {file = "grpcio-1.37.0-cp36-cp36m-win32.whl", hash = "sha256:e1a5322d63346afdda8ad7ff8cf9933a0ab029546395eae31af7cd27ef75e47b"},
- {file = "grpcio-1.37.0-cp36-cp36m-win_amd64.whl", hash = "sha256:5e11b7176e7c14675868b7c46b7aa2da0b184cf7c189348f3ad7c98829de07be"},
- {file = "grpcio-1.37.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:6c2798eaef4eebcf3f9d62b49652bc1110787c684861605d20fec842580f6cee"},
- {file = "grpcio-1.37.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:3e541240650f9173b4891f3e252234976199e487b9bd771e4f082403db50130d"},
- {file = "grpcio-1.37.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:b4f3ddfed733264c4f6431302e5fbafdd9c03f166b98b04d16a058fae3101a5d"},
- {file = "grpcio-1.37.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:f16e40ea37600fe21b51651617867c46d26dcb3f25a5912b7e61c7199b3f5a9f"},
- {file = "grpcio-1.37.0-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b897b825fb464c940001a2cc1d631f418f5b071ccff64647148dbf99c775b98b"},
- {file = "grpcio-1.37.0-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:5e598af1d64ece6a91797b2dcacaf2d537ffb1c0075ecd184c62976068ce1f09"},
- {file = "grpcio-1.37.0-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:1a167d39b1db6e1b29653d69938ff79936602e95863db897ff9eeab81366b304"},
- {file = "grpcio-1.37.0-cp37-cp37m-win32.whl", hash = "sha256:c4f71341c20327bda9f8c28c35d1475af335bb27e591e7f6409d493b49e06223"},
- {file = "grpcio-1.37.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e86acc1462bc796df672568492d24c6b4e7692e3f58b873d56b215dc65553ae1"},
- {file = "grpcio-1.37.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:28f94700775ceca8820fa2c141501ec713e821de7362b966f8d7bf4d8e1eb93a"},
- {file = "grpcio-1.37.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:ca5c96c61289c001b9bcd607dcc1df3060eb8cc13088baf8a6e13268e4879a1f"},
- {file = "grpcio-1.37.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:06cae65dc4557a445748092a61f2adb425ee472088a7e39826369f1f0ae9ffea"},
- {file = "grpcio-1.37.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:6986d58240addd69e001e2e0e97c4b198370dd575162ab4bb1e3ea3816103e75"},
- {file = "grpcio-1.37.0-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:606f0bbfac3860cb6f23f8ebabb974c14db8797317a86d6df063b132f64318f9"},
- {file = "grpcio-1.37.0-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:1c611a4d137a40f8a6803933dd77ab43f04cc54c27fb0e07483fd37b70e7dae6"},
- {file = "grpcio-1.37.0-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:3acfb47d930daec7127a7bc27a7e9c1c276d5e4ae3d2b04a4c7a33432712c811"},
- {file = "grpcio-1.37.0-cp38-cp38-win32.whl", hash = "sha256:575b49cbdd7286df9f77451709060a4a311a9c8767e89cf4e28d3b3200893de4"},
- {file = "grpcio-1.37.0-cp38-cp38-win_amd64.whl", hash = "sha256:04582b260ff0c953011819b1964e875139a7a43adb84621d3ab57f66d0f3d04e"},
- {file = "grpcio-1.37.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:00f0acc463d9e6b1e74e71ce516c8cabd053619d08dd81765eb573492811de54"},
- {file = "grpcio-1.37.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:4eb3907fda03eda8bdb7d666f5371b6500a9054f355a547961da1ee231d2d6aa"},
- {file = "grpcio-1.37.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:3eecf543aa66f7d8304f82854132df6116476279a8e3ba0665c5d93f1ef622de"},
- {file = "grpcio-1.37.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:91f91388e6f72a5d15161124458ad62387470f3a0a16b488db169232f79dd4d2"},
- {file = "grpcio-1.37.0-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:efb928f1a3fd5889b9045c323077d2696937cf9cdb7d2e60b90caa7da5bd1ce9"},
- {file = "grpcio-1.37.0-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:93d990885d392f564ef95a97e0d6936cb09ee404418e8c986835a4d1786b882d"},
- {file = "grpcio-1.37.0-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:ebbb2796ec138cb56373f328f5046ccb9e591046cd8aaccbb8af5bfc397d8b53"},
- {file = "grpcio-1.37.0-cp39-cp39-win32.whl", hash = "sha256:adfef1a3994220bd39e5e2dd57714ca94c4c38c9015f2812a0b09b39f86ddbe0"},
- {file = "grpcio-1.37.0-cp39-cp39-win_amd64.whl", hash = "sha256:df142d51d7de3f8d13aaa78f7ddc7d74088226f92ec5aae8d98d8ae5d328f74b"},
- {file = "grpcio-1.37.0.tar.gz", hash = "sha256:b3ce16aa91569760fdabd77ca901b2288152eb16941d28edd9a3a75a0c4a8a85"},
-]
-idna = [
- {file = "idna-2.10-py2.py3-none-any.whl", hash = "sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0"},
- {file = "idna-2.10.tar.gz", hash = "sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6"},
-]
-imageio = [
- {file = "imageio-2.9.0-py3-none-any.whl", hash = "sha256:3604d751f03002e8e0e7650aa71d8d9148144a87daf17cb1f3228e80747f2e6b"},
- {file = "imageio-2.9.0.tar.gz", hash = "sha256:52ddbaeca2dccf53ba2d6dec5676ca7bc3b2403ef8b37f7da78b7654bb3e10f0"},
-]
-imgaug = [
- {file = "imgaug-0.4.0-py2.py3-none-any.whl", hash = "sha256:ce61e65b4eb7405fc62c1b0a79d2fa92fd47f763aaecb65152d29243592111f9"},
- {file = "imgaug-0.4.0.tar.gz", hash = "sha256:46bab63ed38f8980630ff721a09ca2281b7dbd4d8c11258818b6ebcc69ea46c7"},
-]
-importlib-metadata = [
- {file = "importlib_metadata-4.0.1-py3-none-any.whl", hash = "sha256:d7eb1dea6d6a6086f8be21784cc9e3bcfa55872b52309bc5fad53a8ea444465d"},
- {file = "importlib_metadata-4.0.1.tar.gz", hash = "sha256:8c501196e49fb9df5df43833bdb1e4328f64847763ec8a50703148b73784d581"},
-]
-kiwisolver = [
- {file = "kiwisolver-1.3.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:fd34fbbfbc40628200730bc1febe30631347103fc8d3d4fa012c21ab9c11eca9"},
- {file = "kiwisolver-1.3.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:d3155d828dec1d43283bd24d3d3e0d9c7c350cdfcc0bd06c0ad1209c1bbc36d0"},
- {file = "kiwisolver-1.3.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:5a7a7dbff17e66fac9142ae2ecafb719393aaee6a3768c9de2fd425c63b53e21"},
- {file = "kiwisolver-1.3.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:f8d6f8db88049a699817fd9178782867bf22283e3813064302ac59f61d95be05"},
- {file = "kiwisolver-1.3.1-cp36-cp36m-manylinux2014_ppc64le.whl", hash = "sha256:5f6ccd3dd0b9739edcf407514016108e2280769c73a85b9e59aa390046dbf08b"},
- {file = "kiwisolver-1.3.1-cp36-cp36m-win32.whl", hash = "sha256:225e2e18f271e0ed8157d7f4518ffbf99b9450fca398d561eb5c4a87d0986dd9"},
- {file = "kiwisolver-1.3.1-cp36-cp36m-win_amd64.whl", hash = "sha256:cf8b574c7b9aa060c62116d4181f3a1a4e821b2ec5cbfe3775809474113748d4"},
- {file = "kiwisolver-1.3.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:232c9e11fd7ac3a470d65cd67e4359eee155ec57e822e5220322d7b2ac84fbf0"},
- {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:b38694dcdac990a743aa654037ff1188c7a9801ac3ccc548d3341014bc5ca278"},
- {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ca3820eb7f7faf7f0aa88de0e54681bddcb46e485beb844fcecbcd1c8bd01689"},
- {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:c8fd0f1ae9d92b42854b2979024d7597685ce4ada367172ed7c09edf2cef9cb8"},
- {file = "kiwisolver-1.3.1-cp37-cp37m-manylinux2014_ppc64le.whl", hash = "sha256:1e1bc12fb773a7b2ffdeb8380609f4f8064777877b2225dec3da711b421fda31"},
- {file = "kiwisolver-1.3.1-cp37-cp37m-win32.whl", hash = "sha256:72c99e39d005b793fb7d3d4e660aed6b6281b502e8c1eaf8ee8346023c8e03bc"},
- {file = "kiwisolver-1.3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:8be8d84b7d4f2ba4ffff3665bcd0211318aa632395a1a41553250484a871d454"},
- {file = "kiwisolver-1.3.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:31dfd2ac56edc0ff9ac295193eeaea1c0c923c0355bf948fbd99ed6018010b72"},
- {file = "kiwisolver-1.3.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:563c649cfdef27d081c84e72a03b48ea9408c16657500c312575ae9d9f7bc1c3"},
- {file = "kiwisolver-1.3.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:78751b33595f7f9511952e7e60ce858c6d64db2e062afb325985ddbd34b5c131"},
- {file = "kiwisolver-1.3.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:a357fd4f15ee49b4a98b44ec23a34a95f1e00292a139d6015c11f55774ef10de"},
- {file = "kiwisolver-1.3.1-cp38-cp38-manylinux2014_ppc64le.whl", hash = "sha256:5989db3b3b34b76c09253deeaf7fbc2707616f130e166996606c284395da3f18"},
- {file = "kiwisolver-1.3.1-cp38-cp38-win32.whl", hash = "sha256:c08e95114951dc2090c4a630c2385bef681cacf12636fb0241accdc6b303fd81"},
- {file = "kiwisolver-1.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:44a62e24d9b01ba94ae7a4a6c3fb215dc4af1dde817e7498d901e229aaf50e4e"},
- {file = "kiwisolver-1.3.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:50af681a36b2a1dee1d3c169ade9fdc59207d3c31e522519181e12f1b3ba7000"},
- {file = "kiwisolver-1.3.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:a53d27d0c2a0ebd07e395e56a1fbdf75ffedc4a05943daf472af163413ce9598"},
- {file = "kiwisolver-1.3.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:834ee27348c4aefc20b479335fd422a2c69db55f7d9ab61721ac8cd83eb78882"},
- {file = "kiwisolver-1.3.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:5c3e6455341008a054cccee8c5d24481bcfe1acdbc9add30aa95798e95c65621"},
- {file = "kiwisolver-1.3.1-cp39-cp39-manylinux2014_ppc64le.whl", hash = "sha256:acef3d59d47dd85ecf909c359d0fd2c81ed33bdff70216d3956b463e12c38a54"},
- {file = "kiwisolver-1.3.1-cp39-cp39-win32.whl", hash = "sha256:c5518d51a0735b1e6cee1fdce66359f8d2b59c3ca85dc2b0813a8aa86818a030"},
- {file = "kiwisolver-1.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:b9edd0110a77fc321ab090aaa1cfcaba1d8499850a12848b81be2222eab648f6"},
- {file = "kiwisolver-1.3.1-pp36-pypy36_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0cd53f403202159b44528498de18f9285b04482bab2a6fc3f5dd8dbb9352e30d"},
- {file = "kiwisolver-1.3.1-pp36-pypy36_pp73-manylinux2010_x86_64.whl", hash = "sha256:33449715e0101e4d34f64990352bce4095c8bf13bed1b390773fc0a7295967b3"},
- {file = "kiwisolver-1.3.1-pp36-pypy36_pp73-win32.whl", hash = "sha256:401a2e9afa8588589775fe34fc22d918ae839aaaf0c0e96441c0fdbce6d8ebe6"},
- {file = "kiwisolver-1.3.1.tar.gz", hash = "sha256:950a199911a8d94683a6b10321f9345d5a3a8433ec58b217ace979e18f16e248"},
-]
-markdown = [
- {file = "Markdown-3.3.4-py3-none-any.whl", hash = "sha256:96c3ba1261de2f7547b46a00ea8463832c921d3f9d6aba3f255a6f71386db20c"},
- {file = "Markdown-3.3.4.tar.gz", hash = "sha256:31b5b491868dcc87d6c24b7e3d19a0d730d59d3e46f4eea6430a321bed387a49"},
-]
-matplotlib = [
- {file = "matplotlib-3.3.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:672960dd114e342b7c610bf32fb99d14227f29919894388b41553217457ba7ef"},
- {file = "matplotlib-3.3.4-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:7c155437ae4fd366e2700e2716564d1787700687443de46bcb895fe0f84b761d"},
- {file = "matplotlib-3.3.4-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:a17f0a10604fac7627ec82820439e7db611722e80c408a726cd00d8c974c2fb3"},
- {file = "matplotlib-3.3.4-cp36-cp36m-win32.whl", hash = "sha256:215e2a30a2090221a9481db58b770ce56b8ef46f13224ae33afe221b14b24dc1"},
- {file = "matplotlib-3.3.4-cp36-cp36m-win_amd64.whl", hash = "sha256:348e6032f666ffd151b323342f9278b16b95d4a75dfacae84a11d2829a7816ae"},
- {file = "matplotlib-3.3.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:94bdd1d55c20e764d8aea9d471d2ae7a7b2c84445e0fa463f02e20f9730783e1"},
- {file = "matplotlib-3.3.4-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:a1acb72f095f1d58ecc2538ed1b8bca0b57df313b13db36ed34b8cdf1868e674"},
- {file = "matplotlib-3.3.4-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:46b1a60a04e6d884f0250d5cc8dc7bd21a9a96c584a7acdaab44698a44710bab"},
- {file = "matplotlib-3.3.4-cp37-cp37m-win32.whl", hash = "sha256:ed4a9e6dcacba56b17a0a9ac22ae2c72a35b7f0ef0693aa68574f0b2df607a89"},
- {file = "matplotlib-3.3.4-cp37-cp37m-win_amd64.whl", hash = "sha256:c24c05f645aef776e8b8931cb81e0f1632d229b42b6d216e30836e2e145a2b40"},
- {file = "matplotlib-3.3.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7310e353a4a35477c7f032409966920197d7df3e757c7624fd842f3eeb307d3d"},
- {file = "matplotlib-3.3.4-cp38-cp38-manylinux1_i686.whl", hash = "sha256:451cc89cb33d6652c509fc6b588dc51c41d7246afdcc29b8624e256b7663ed1f"},
- {file = "matplotlib-3.3.4-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:3d2eb9c1cc254d0ffa90bc96fde4b6005d09c2228f99dfd493a4219c1af99644"},
- {file = "matplotlib-3.3.4-cp38-cp38-win32.whl", hash = "sha256:e15fa23d844d54e7b3b7243afd53b7567ee71c721f592deb0727ee85e668f96a"},
- {file = "matplotlib-3.3.4-cp38-cp38-win_amd64.whl", hash = "sha256:1de0bb6cbfe460725f0e97b88daa8643bcf9571c18ba90bb8e41432aaeca91d6"},
- {file = "matplotlib-3.3.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f44149a0ef5b4991aaef12a93b8e8d66d6412e762745fea1faa61d98524e0ba9"},
- {file = "matplotlib-3.3.4-cp39-cp39-manylinux1_i686.whl", hash = "sha256:746a1df55749629e26af7f977ea426817ca9370ad1569436608dc48d1069b87c"},
- {file = "matplotlib-3.3.4-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:5f571b92a536206f7958f7cb2d367ff6c9a1fa8229dc35020006e4cdd1ca0acd"},
- {file = "matplotlib-3.3.4-cp39-cp39-win32.whl", hash = "sha256:9265ae0fb35e29f9b8cc86c2ab0a2e3dcddc4dd9de4b85bf26c0f63fe5c1c2ca"},
- {file = "matplotlib-3.3.4-cp39-cp39-win_amd64.whl", hash = "sha256:9a79e5dd7bb797aa611048f5b70588b23c5be05b63eefd8a0d152ac77c4243db"},
- {file = "matplotlib-3.3.4-pp36-pypy36_pp73-macosx_10_9_x86_64.whl", hash = "sha256:1e850163579a8936eede29fad41e202b25923a0a8d5ffd08ce50fc0a97dcdc93"},
- {file = "matplotlib-3.3.4-pp36-pypy36_pp73-manylinux2010_x86_64.whl", hash = "sha256:d738acfdfb65da34c91acbdb56abed46803db39af259b7f194dc96920360dbe4"},
- {file = "matplotlib-3.3.4-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:aa49571d8030ad0b9ac39708ee77bd2a22f87815e12bdee52ecaffece9313ed8"},
- {file = "matplotlib-3.3.4-pp37-pypy37_pp73-manylinux2010_x86_64.whl", hash = "sha256:cf3a7e54eff792f0815dbbe9b85df2f13d739289c93d346925554f71d484be78"},
- {file = "matplotlib-3.3.4.tar.gz", hash = "sha256:3e477db76c22929e4c6876c44f88d790aacdf3c3f8f3a90cb1975c0bf37825b0"},
-]
-networkx = [
- {file = "networkx-2.5.1-py3-none-any.whl", hash = "sha256:0635858ed7e989f4c574c2328380b452df892ae85084144c73d8cd819f0c4e06"},
- {file = "networkx-2.5.1.tar.gz", hash = "sha256:109cd585cac41297f71103c3c42ac6ef7379f29788eb54cb751be5a663bb235a"},
-]
-numpy = [
- {file = "numpy-1.19.5-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc6bd4fd593cb261332568485e20a0712883cf631f6f5e8e86a52caa8b2b50ff"},
- {file = "numpy-1.19.5-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:aeb9ed923be74e659984e321f609b9ba54a48354bfd168d21a2b072ed1e833ea"},
- {file = "numpy-1.19.5-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8b5e972b43c8fc27d56550b4120fe6257fdc15f9301914380b27f74856299fea"},
- {file = "numpy-1.19.5-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:43d4c81d5ffdff6bae58d66a3cd7f54a7acd9a0e7b18d97abb255defc09e3140"},
- {file = "numpy-1.19.5-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:a4646724fba402aa7504cd48b4b50e783296b5e10a524c7a6da62e4a8ac9698d"},
- {file = "numpy-1.19.5-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:2e55195bc1c6b705bfd8ad6f288b38b11b1af32f3c8289d6c50d47f950c12e76"},
- {file = "numpy-1.19.5-cp36-cp36m-win32.whl", hash = "sha256:39b70c19ec771805081578cc936bbe95336798b7edf4732ed102e7a43ec5c07a"},
- {file = "numpy-1.19.5-cp36-cp36m-win_amd64.whl", hash = "sha256:dbd18bcf4889b720ba13a27ec2f2aac1981bd41203b3a3b27ba7a33f88ae4827"},
- {file = "numpy-1.19.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:603aa0706be710eea8884af807b1b3bc9fb2e49b9f4da439e76000f3b3c6ff0f"},
- {file = "numpy-1.19.5-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:cae865b1cae1ec2663d8ea56ef6ff185bad091a5e33ebbadd98de2cfa3fa668f"},
- {file = "numpy-1.19.5-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:36674959eed6957e61f11c912f71e78857a8d0604171dfd9ce9ad5cbf41c511c"},
- {file = "numpy-1.19.5-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:06fab248a088e439402141ea04f0fffb203723148f6ee791e9c75b3e9e82f080"},
- {file = "numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:6149a185cece5ee78d1d196938b2a8f9d09f5a5ebfbba66969302a778d5ddd1d"},
- {file = "numpy-1.19.5-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:50a4a0ad0111cc1b71fa32dedd05fa239f7fb5a43a40663269bb5dc7877cfd28"},
- {file = "numpy-1.19.5-cp37-cp37m-win32.whl", hash = "sha256:d051ec1c64b85ecc69531e1137bb9751c6830772ee5c1c426dbcfe98ef5788d7"},
- {file = "numpy-1.19.5-cp37-cp37m-win_amd64.whl", hash = "sha256:a12ff4c8ddfee61f90a1633a4c4afd3f7bcb32b11c52026c92a12e1325922d0d"},
- {file = "numpy-1.19.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:cf2402002d3d9f91c8b01e66fbb436a4ed01c6498fffed0e4c7566da1d40ee1e"},
- {file = "numpy-1.19.5-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1ded4fce9cfaaf24e7a0ab51b7a87be9038ea1ace7f34b841fe3b6894c721d1c"},
- {file = "numpy-1.19.5-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:012426a41bc9ab63bb158635aecccc7610e3eff5d31d1eb43bc099debc979d94"},
- {file = "numpy-1.19.5-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:759e4095edc3c1b3ac031f34d9459fa781777a93ccc633a472a5468587a190ff"},
- {file = "numpy-1.19.5-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:a9d17f2be3b427fbb2bce61e596cf555d6f8a56c222bd2ca148baeeb5e5c783c"},
- {file = "numpy-1.19.5-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:99abf4f353c3d1a0c7a5f27699482c987cf663b1eac20db59b8c7b061eabd7fc"},
- {file = "numpy-1.19.5-cp38-cp38-win32.whl", hash = "sha256:384ec0463d1c2671170901994aeb6dce126de0a95ccc3976c43b0038a37329c2"},
- {file = "numpy-1.19.5-cp38-cp38-win_amd64.whl", hash = "sha256:811daee36a58dc79cf3d8bdd4a490e4277d0e4b7d103a001a4e73ddb48e7e6aa"},
- {file = "numpy-1.19.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c843b3f50d1ab7361ca4f0b3639bf691569493a56808a0b0c54a051d260b7dbd"},
- {file = "numpy-1.19.5-cp39-cp39-manylinux1_i686.whl", hash = "sha256:d6631f2e867676b13026e2846180e2c13c1e11289d67da08d71cacb2cd93d4aa"},
- {file = "numpy-1.19.5-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:7fb43004bce0ca31d8f13a6eb5e943fa73371381e53f7074ed21a4cb786c32f8"},
- {file = "numpy-1.19.5-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2ea52bd92ab9f768cc64a4c3ef8f4b2580a17af0a5436f6126b08efbd1838371"},
- {file = "numpy-1.19.5-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:400580cbd3cff6ffa6293df2278c75aef2d58d8d93d3c5614cd67981dae68ceb"},
- {file = "numpy-1.19.5-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:df609c82f18c5b9f6cb97271f03315ff0dbe481a2a02e56aeb1b1a985ce38e60"},
- {file = "numpy-1.19.5-cp39-cp39-win32.whl", hash = "sha256:ab83f24d5c52d60dbc8cd0528759532736b56db58adaa7b5f1f76ad551416a1e"},
- {file = "numpy-1.19.5-cp39-cp39-win_amd64.whl", hash = "sha256:0eef32ca3132a48e43f6a0f5a82cb508f22ce5a3d6f67a8329c81c8e226d3f6e"},
- {file = "numpy-1.19.5-pp36-pypy36_pp73-manylinux2010_x86_64.whl", hash = "sha256:a0d53e51a6cb6f0d9082decb7a4cb6dfb33055308c4c44f53103c073f649af73"},
- {file = "numpy-1.19.5.zip", hash = "sha256:a76f502430dd98d7546e1ea2250a7360c065a5fdea52b2dffe8ae7180909b6f4"},
-]
-oauthlib = [
- {file = "oauthlib-3.1.0-py2.py3-none-any.whl", hash = "sha256:df884cd6cbe20e32633f1db1072e9356f53638e4361bef4e8b03c9127c9328ea"},
- {file = "oauthlib-3.1.0.tar.gz", hash = "sha256:bee41cc35fcca6e988463cacc3bcb8a96224f470ca547e697b604cc697b2f889"},
-]
-opencv-python = [
- {file = "opencv-python-4.5.1.48.tar.gz", hash = "sha256:78a6db8467639383caedf1d111da3510a4ee1a0aacf2117821cae2ee8f92ce37"},
- {file = "opencv_python-4.5.1.48-cp36-cp36m-macosx_10_13_x86_64.whl", hash = "sha256:bcb27773cfd5340b2b599b303d9f5499838ef4780c20c038f6030175408c64df"},
- {file = "opencv_python-4.5.1.48-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:9646875c501788b1b098f282d777b667d6da69801739504f1b2fd1268970d1da"},
- {file = "opencv_python-4.5.1.48-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:ebe83901971a6755512424c4fe9f63341cca501b7c497bf608dd38ee31ba3f4c"},
- {file = "opencv_python-4.5.1.48-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:d8aefcb30b71064dbbaa2b0ace161a36464c29375a83998fbda39a1d1740f942"},
- {file = "opencv_python-4.5.1.48-cp36-cp36m-win32.whl", hash = "sha256:32dee1c9fd3e31e28edef7b56f868e2b40e280b7062304f9fb8a14dbc51547d5"},
- {file = "opencv_python-4.5.1.48-cp36-cp36m-win_amd64.whl", hash = "sha256:9c77d508e6822f1f40c727d21b822d017622d8305dce7eccf0ab06caac16d5c6"},
- {file = "opencv_python-4.5.1.48-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:4982fa8ccc38310a2bd93e06334ba090b12b6aff2f6fcb8ff9613e3c9bc48f48"},
- {file = "opencv_python-4.5.1.48-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:c0503bfaa2b7b743d6ff5d81f1dd8428dbf4c33e7e4f836456d11be20c2e7721"},
- {file = "opencv_python-4.5.1.48-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:e27d062fa1098d90f48b6c047351c89816492a08906a021c973ce510b04a7b9d"},
- {file = "opencv_python-4.5.1.48-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:6d8434a45e8f75c4da5fd0068ce001f4f8e35771cc851d746d4721eeaf517e25"},
- {file = "opencv_python-4.5.1.48-cp37-cp37m-win32.whl", hash = "sha256:e2c17714da59d9d516ceef0450766ff9557ee232d62f702665af905193557582"},
- {file = "opencv_python-4.5.1.48-cp37-cp37m-win_amd64.whl", hash = "sha256:efac9893d9e21cfb599828801c755ecde8f1e657f05ec6f002efe19422456d5a"},
- {file = "opencv_python-4.5.1.48-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:e77d0feaff37326f62b127098264e2a7099deb476e38432b1083ce11cdedf560"},
- {file = "opencv_python-4.5.1.48-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:ffc75c614b8dc3d8102f3ba15dafd6ec0400c7ffa71a91953d41511964ee50e0"},
- {file = "opencv_python-4.5.1.48-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:c1159d91f29a85c3333edef6ca420284566d9bcdae46dda2fe7282515b48c8b6"},
- {file = "opencv_python-4.5.1.48-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:d16144c435b816c5536d5ff012c1a2b7e93155017db7103942ff7efb98c4df1f"},
- {file = "opencv_python-4.5.1.48-cp38-cp38-win32.whl", hash = "sha256:b2b9ac86aec5f2dd531545cebdea1a1ef4f81ef1fb1760d78b4725f9575504f9"},
- {file = "opencv_python-4.5.1.48-cp38-cp38-win_amd64.whl", hash = "sha256:30edebc81b260bcfeb760b3600c367c5261dfb2fe41e5d1408d5357d0867b40d"},
- {file = "opencv_python-4.5.1.48-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:e38fbd7b2db03204ec09930609b7313d6b6d2b271c8fe2c0aa271fa69b726a1b"},
- {file = "opencv_python-4.5.1.48-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:fc1472b825d26c8a4f1cfb172a90c3cc47733e4af7522276c1c2efe8f6006a8b"},
- {file = "opencv_python-4.5.1.48-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:c4ea4f8b217f3e8be6247fc0787fb81797d85202c722523f41070124a7a621c7"},
- {file = "opencv_python-4.5.1.48-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:a1dfa0486db367594510c0c799ec7481247dc86e651b69008806d875ab731471"},
- {file = "opencv_python-4.5.1.48-cp39-cp39-win32.whl", hash = "sha256:5172cb37dfd8a0b4945b071a493eb36e5f17675a160637fa380f9c1d9d80535c"},
- {file = "opencv_python-4.5.1.48-cp39-cp39-win_amd64.whl", hash = "sha256:c8cc1f5ff3c352ebe756119014c4e4ec7ae5ac536d1f66b0316667ced37637c8"},
-]
-pillow = [
- {file = "Pillow-8.2.0-cp36-cp36m-macosx_10_10_x86_64.whl", hash = "sha256:dc38f57d8f20f06dd7c3161c59ca2c86893632623f33a42d592f097b00f720a9"},
- {file = "Pillow-8.2.0-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:a013cbe25d20c2e0c4e85a9daf438f85121a4d0344ddc76e33fd7e3965d9af4b"},
- {file = "Pillow-8.2.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8bb1e155a74e1bfbacd84555ea62fa21c58e0b4e7e6b20e4447b8d07990ac78b"},
- {file = "Pillow-8.2.0-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c5236606e8570542ed424849f7852a0ff0bce2c4c8d0ba05cc202a5a9c97dee9"},
- {file = "Pillow-8.2.0-cp36-cp36m-win32.whl", hash = "sha256:12e5e7471f9b637762453da74e390e56cc43e486a88289995c1f4c1dc0bfe727"},
- {file = "Pillow-8.2.0-cp36-cp36m-win_amd64.whl", hash = "sha256:5afe6b237a0b81bd54b53f835a153770802f164c5570bab5e005aad693dab87f"},
- {file = "Pillow-8.2.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:cb7a09e173903541fa888ba010c345893cd9fc1b5891aaf060f6ca77b6a3722d"},
- {file = "Pillow-8.2.0-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:0d19d70ee7c2ba97631bae1e7d4725cdb2ecf238178096e8c82ee481e189168a"},
- {file = "Pillow-8.2.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:083781abd261bdabf090ad07bb69f8f5599943ddb539d64497ed021b2a67e5a9"},
- {file = "Pillow-8.2.0-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:c6b39294464b03457f9064e98c124e09008b35a62e3189d3513e5148611c9388"},
- {file = "Pillow-8.2.0-cp37-cp37m-win32.whl", hash = "sha256:01425106e4e8cee195a411f729cff2a7d61813b0b11737c12bd5991f5f14bcd5"},
- {file = "Pillow-8.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b570f84a6161cf8865c4e08adf629441f56e32f180f7aa4ccbd2e0a5a02cba2"},
- {file = "Pillow-8.2.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:031a6c88c77d08aab84fecc05c3cde8414cd6f8406f4d2b16fed1e97634cc8a4"},
- {file = "Pillow-8.2.0-cp38-cp38-manylinux1_i686.whl", hash = "sha256:66cc56579fd91f517290ab02c51e3a80f581aba45fd924fcdee01fa06e635812"},
- {file = "Pillow-8.2.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:6c32cc3145928c4305d142ebec682419a6c0a8ce9e33db900027ddca1ec39178"},
- {file = "Pillow-8.2.0-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:624b977355cde8b065f6d51b98497d6cd5fbdd4f36405f7a8790e3376125e2bb"},
- {file = "Pillow-8.2.0-cp38-cp38-win32.whl", hash = "sha256:5cbf3e3b1014dddc45496e8cf38b9f099c95a326275885199f427825c6522232"},
- {file = "Pillow-8.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:463822e2f0d81459e113372a168f2ff59723e78528f91f0bd25680ac185cf797"},
- {file = "Pillow-8.2.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:95d5ef984eff897850f3a83883363da64aae1000e79cb3c321915468e8c6add5"},
- {file = "Pillow-8.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b91c36492a4bbb1ee855b7d16fe51379e5f96b85692dc8210831fbb24c43e484"},
- {file = "Pillow-8.2.0-cp39-cp39-manylinux1_i686.whl", hash = "sha256:d68cb92c408261f806b15923834203f024110a2e2872ecb0bd2a110f89d3c602"},
- {file = "Pillow-8.2.0-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f217c3954ce5fd88303fc0c317af55d5e0204106d86dea17eb8205700d47dec2"},
- {file = "Pillow-8.2.0-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:5b70110acb39f3aff6b74cf09bb4169b167e2660dabc304c1e25b6555fa781ef"},
- {file = "Pillow-8.2.0-cp39-cp39-win32.whl", hash = "sha256:a7d5e9fad90eff8f6f6106d3b98b553a88b6f976e51fce287192a5d2d5363713"},
- {file = "Pillow-8.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:238c197fc275b475e87c1453b05b467d2d02c2915fdfdd4af126145ff2e4610c"},
- {file = "Pillow-8.2.0-pp36-pypy36_pp73-macosx_10_10_x86_64.whl", hash = "sha256:0e04d61f0064b545b989126197930807c86bcbd4534d39168f4aa5fda39bb8f9"},
- {file = "Pillow-8.2.0-pp36-pypy36_pp73-manylinux2010_i686.whl", hash = "sha256:63728564c1410d99e6d1ae8e3b810fe012bc440952168af0a2877e8ff5ab96b9"},
- {file = "Pillow-8.2.0-pp36-pypy36_pp73-manylinux2010_x86_64.whl", hash = "sha256:c03c07ed32c5324939b19e36ae5f75c660c81461e312a41aea30acdd46f93a7c"},
- {file = "Pillow-8.2.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:4d98abdd6b1e3bf1a1cbb14c3895226816e666749ac040c4e2554231068c639b"},
- {file = "Pillow-8.2.0-pp37-pypy37_pp73-manylinux2010_i686.whl", hash = "sha256:aac00e4bc94d1b7813fe882c28990c1bc2f9d0e1aa765a5f2b516e8a6a16a9e4"},
- {file = "Pillow-8.2.0-pp37-pypy37_pp73-manylinux2010_x86_64.whl", hash = "sha256:22fd0f42ad15dfdde6c581347eaa4adb9a6fc4b865f90b23378aa7914895e120"},
- {file = "Pillow-8.2.0-pp37-pypy37_pp73-win32.whl", hash = "sha256:e98eca29a05913e82177b3ba3d198b1728e164869c613d76d0de4bde6768a50e"},
- {file = "Pillow-8.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:8b56553c0345ad6dcb2e9b433ae47d67f95fc23fe28a0bde15a120f25257e291"},
- {file = "Pillow-8.2.0.tar.gz", hash = "sha256:a787ab10d7bb5494e5f76536ac460741788f1fbce851068d73a87ca7c35fc3e1"},
-]
-profilehooks = [
- {file = "profilehooks-1.12.0-py2.py3-none-any.whl", hash = "sha256:dc87f319c9596b8c50fd374e3c08c51fa29a61553f1d9281482e4ca31829b021"},
- {file = "profilehooks-1.12.0.tar.gz", hash = "sha256:05b87589df8a8c630fd701bae6008cc1cfff4457bd0064887ad25248327a5ba3"},
-]
-protobuf = [
- {file = "protobuf-3.15.8-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:fad4f971ec38d8df7f4b632c819bf9bbf4f57cfd7312cf526c69ce17ef32436a"},
- {file = "protobuf-3.15.8-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:f17b352d7ce33c81773cf81d536ca70849de6f73c96413f17309f4b43ae7040b"},
- {file = "protobuf-3.15.8-cp35-cp35m-macosx_10_9_intel.whl", hash = "sha256:4a054b0b5900b7ea7014099e783fb8c4618e4209fffcd6050857517b3f156e18"},
- {file = "protobuf-3.15.8-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:efa4c4d4fc9ba734e5e85eaced70e1b63fb3c8d08482d839eb838566346f1737"},
- {file = "protobuf-3.15.8-cp35-cp35m-win32.whl", hash = "sha256:07eec4e2ccbc74e95bb9b3afe7da67957947ee95bdac2b2e91b038b832dd71f0"},
- {file = "protobuf-3.15.8-cp35-cp35m-win_amd64.whl", hash = "sha256:f9cadaaa4065d5dd4d15245c3b68b967b3652a3108e77f292b58b8c35114b56c"},
- {file = "protobuf-3.15.8-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:2dc0e8a9e4962207bdc46a365b63a3f1aca6f9681a5082a326c5837ef8f4b745"},
- {file = "protobuf-3.15.8-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f80afc0a0ba13339bbab25ca0409e9e2836b12bb012364c06e97c2df250c3343"},
- {file = "protobuf-3.15.8-cp36-cp36m-win32.whl", hash = "sha256:c5566f956a26cda3abdfacc0ca2e21db6c9f3d18f47d8d4751f2209d6c1a5297"},
- {file = "protobuf-3.15.8-cp36-cp36m-win_amd64.whl", hash = "sha256:dab75b56a12b1ceb3e40808b5bd9dfdaef3a1330251956e6744e5b6ed8f8830b"},
- {file = "protobuf-3.15.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3053f13207e7f13dc7be5e9071b59b02020172f09f648e85dc77e3fcb50d1044"},
- {file = "protobuf-3.15.8-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:1f0b5d156c3df08cc54bc2c8b8b875648ea4cd7ebb2a9a130669f7547ec3488c"},
- {file = "protobuf-3.15.8-cp37-cp37m-win32.whl", hash = "sha256:90270fe5732c1f1ff664a3bd7123a16456d69b4e66a09a139a00443a32f210b8"},
- {file = "protobuf-3.15.8-cp37-cp37m-win_amd64.whl", hash = "sha256:f42c2f5fb67da5905bfc03733a311f72fa309252bcd77c32d1462a1ad519521e"},
- {file = "protobuf-3.15.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f6077db37bfa16494dca58a4a02bfdacd87662247ad6bc1f7f8d13ff3f0013e1"},
- {file = "protobuf-3.15.8-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:510e66491f1a5ac5953c908aa8300ec47f793130097e4557482803b187a8ee05"},
- {file = "protobuf-3.15.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5ff9fa0e67fcab442af9bc8d4ec3f82cb2ff3be0af62dba047ed4187f0088b7d"},
- {file = "protobuf-3.15.8-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:1c0e9e56202b9dccbc094353285a252e2b7940b74fdf75f1b4e1b137833fabd7"},
- {file = "protobuf-3.15.8-py2.py3-none-any.whl", hash = "sha256:a0a08c6b2e6d6c74a6eb5bf6184968eefb1569279e78714e239d33126e753403"},
- {file = "protobuf-3.15.8.tar.gz", hash = "sha256:0277f62b1e42210cafe79a71628c1d553348da81cbd553402a7f7549c50b11d0"},
-]
-pyasn1 = [
- {file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
- {file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
- {file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
- {file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
- {file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
- {file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
- {file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
- {file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
- {file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
- {file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
- {file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
- {file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
- {file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
-]
-pyasn1-modules = [
- {file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
- {file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
- {file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
- {file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
- {file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
- {file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
- {file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
- {file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
- {file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
- {file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
- {file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
- {file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
- {file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
-]
-pyparsing = [
- {file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"},
- {file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"},
-]
-python-dateutil = [
- {file = "python-dateutil-2.8.1.tar.gz", hash = "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c"},
- {file = "python_dateutil-2.8.1-py2.py3-none-any.whl", hash = "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a"},
-]
-pywavelets = [
- {file = "PyWavelets-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:35959c041ec014648575085a97b498eafbbaa824f86f6e4a59bfdef8a3fe6308"},
- {file = "PyWavelets-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:55e39ec848ceec13c9fa1598253ae9dd5c31d09dfd48059462860d2b908fb224"},
- {file = "PyWavelets-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c06d2e340c7bf8b9ec71da2284beab8519a3908eab031f4ea126e8ccfc3fd567"},
- {file = "PyWavelets-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:be105382961745f88d8196bba5a69ee2c4455d87ad2a2e5d1eed6bd7fda4d3fd"},
- {file = "PyWavelets-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:076ca8907001fdfe4205484f719d12b4a0262dfe6652fa1cfc3c5c362d14dc84"},
- {file = "PyWavelets-1.1.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:7947e51ca05489b85928af52a34fe67022ab5b81d4ae32a4109a99e883a0635e"},
- {file = "PyWavelets-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:9e2528823ccf5a0a1d23262dfefe5034dce89cd84e4e124dc553dfcdf63ebb92"},
- {file = "PyWavelets-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:80b924edbc012ded8aa8b91cb2fd6207fb1a9a3a377beb4049b8a07445cec6f0"},
- {file = "PyWavelets-1.1.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c2a799e79cee81a862216c47e5623c97b95f1abee8dd1f9eed736df23fb653fb"},
- {file = "PyWavelets-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:d510aef84d9852653d079c84f2f81a82d5d09815e625f35c95714e7364570ad4"},
- {file = "PyWavelets-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:889d4c5c5205a9c90118c1980df526857929841df33e4cd1ff1eff77c6817a65"},
- {file = "PyWavelets-1.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:68b5c33741d26c827074b3d8f0251de1c3019bb9567b8d303eb093c822ce28f1"},
- {file = "PyWavelets-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:18a51b3f9416a2ae6e9a35c4af32cf520dd7895f2b69714f4aa2f4342fca47f9"},
- {file = "PyWavelets-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:cfe79844526dd92e3ecc9490b5031fca5f8ab607e1e858feba232b1b788ff0ea"},
- {file = "PyWavelets-1.1.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:2f7429eeb5bf9c7068002d0d7f094ed654c77a70ce5e6198737fd68ab85f8311"},
- {file = "PyWavelets-1.1.1-cp37-cp37m-win32.whl", hash = "sha256:720dbcdd3d91c6dfead79c80bf8b00a1d8aa4e5d551dc528c6d5151e4efc3403"},
- {file = "PyWavelets-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:bc5e87b72371da87c9bebc68e54882aada9c3114e640de180f62d5da95749cd3"},
- {file = "PyWavelets-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:98b2669c5af842a70cfab33a7043fcb5e7535a690a00cd251b44c9be0be418e5"},
- {file = "PyWavelets-1.1.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e02a0558e0c2ac8b8bbe6a6ac18c136767ec56b96a321e0dfde2173adfa5a504"},
- {file = "PyWavelets-1.1.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:6162dc0ae04669ea04b4b51420777b9ea2d30b0a9d02901b2a3b4d61d159c2e9"},
- {file = "PyWavelets-1.1.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:39c74740718e420d38c78ca4498568fa57976d78d5096277358e0fa9629a7aea"},
- {file = "PyWavelets-1.1.1-cp38-cp38-win32.whl", hash = "sha256:79f5b54f9dc353e5ee47f0c3f02bebd2c899d49780633aa771fed43fa20b3149"},
- {file = "PyWavelets-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:935ff247b8b78bdf77647fee962b1cc208c51a7b229db30b9ba5f6da3e675178"},
- {file = "PyWavelets-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6ebfefebb5c6494a3af41ad8c60248a95da267a24b79ed143723d4502b1fe4d7"},
- {file = "PyWavelets-1.1.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:6bc78fb9c42a716309b4ace56f51965d8b5662c3ba19d4591749f31773db1125"},
- {file = "PyWavelets-1.1.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:411e17ca6ed8cf5e18a7ca5ee06a91c25800cc6c58c77986202abf98d749273a"},
- {file = "PyWavelets-1.1.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:83c5e3eb78ce111c2f0b45f46106cc697c3cb6c4e5f51308e1f81b512c70c8fb"},
- {file = "PyWavelets-1.1.1-cp39-cp39-win32.whl", hash = "sha256:2b634a54241c190ee989a4af87669d377b37c91bcc9cf0efe33c10ff847f7841"},
- {file = "PyWavelets-1.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:732bab78435c48be5d6bc75486ef629d7c8f112e07b313bf1f1a2220ab437277"},
- {file = "PyWavelets-1.1.1.tar.gz", hash = "sha256:1a64b40f6acb4ffbaccce0545d7fc641744f95351f62e4c6aaa40549326008c9"},
-]
-requests = [
- {file = "requests-2.25.1-py2.py3-none-any.whl", hash = "sha256:c210084e36a42ae6b9219e00e48287def368a26d03a048ddad7bfee44f75871e"},
- {file = "requests-2.25.1.tar.gz", hash = "sha256:27973dd4a904a4f13b263a19c866c13b92a39ed1c964655f025f3f8d3d75b804"},
-]
-requests-oauthlib = [
- {file = "requests-oauthlib-1.3.0.tar.gz", hash = "sha256:b4261601a71fd721a8bd6d7aa1cc1d6a8a93b4a9f5e96626f8e4d91e8beeaa6a"},
- {file = "requests_oauthlib-1.3.0-py2.py3-none-any.whl", hash = "sha256:7f71572defaecd16372f9006f33c2ec8c077c3cfa6f5911a9a90202beb513f3d"},
- {file = "requests_oauthlib-1.3.0-py3.7.egg", hash = "sha256:fa6c47b933f01060936d87ae9327fead68768b69c6c9ea2109c48be30f2d4dbc"},
-]
-rope = [
- {file = "rope-0.19.0.tar.gz", hash = "sha256:64e6d747532e1f5c8009ec5aae3e5523a5bcedf516f39a750d57d8ed749d90da"},
-]
-rsa = [
- {file = "rsa-4.4-py2.py3-none-any.whl", hash = "sha256:4afbaaecc3e9550c7351fdf0ab3fea1857ff616b85bab59215f00fb42e0e9582"},
- {file = "rsa-4.4.tar.gz", hash = "sha256:5d95293bbd0fbee1dd9cb4b72d27b723942eb50584abc8c4f5f00e4bcfa55307"},
-]
-scikit-image = [
- {file = "scikit-image-0.17.2.tar.gz", hash = "sha256:bd954c0588f0f7e81d9763dc95e06950e68247d540476e06cb77bcbcd8c2d8b3"},
- {file = "scikit_image-0.17.2-cp36-cp36m-macosx_10_13_x86_64.whl", hash = "sha256:11eec2e65cd4cd6487fe1089aa3538dbe25525aec7a36f5a0f14145df0163ce7"},
- {file = "scikit_image-0.17.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:c5c277704b12e702e34d1f7b7a04d5ee8418735f535d269c74c02c6c9f8abee2"},
- {file = "scikit_image-0.17.2-cp36-cp36m-win32.whl", hash = "sha256:1fda9109a19dc9d7a4ac152d1fc226fed7282ad186a099f14c0aa9151f0c758e"},
- {file = "scikit_image-0.17.2-cp36-cp36m-win_amd64.whl", hash = "sha256:86a834f9a4d30201c0803a48a25364fe8f93f9feb3c58f2c483d3ce0a3e5fe4a"},
- {file = "scikit_image-0.17.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:87ca5168c6fc36b7a298a1db2d185a8298f549854342020f282f747a4e4ddce9"},
- {file = "scikit_image-0.17.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:e99fa7514320011b250a21ab855fdd61ddcc05d3c77ec9e8f13edcc15d3296b5"},
- {file = "scikit_image-0.17.2-cp37-cp37m-win32.whl", hash = "sha256:ee3db438b5b9f8716a91ab26a61377a8a63356b186706f5b979822cc7241006d"},
- {file = "scikit_image-0.17.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6b65a103edbc34b22640daf3b084dc9e470c358d3298c10aa9e3b424dcc02db6"},
- {file = "scikit_image-0.17.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:c0876e562991b0babff989ff4d00f35067a2ddef82e5fdd895862555ffbaec25"},
- {file = "scikit_image-0.17.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:178210582cc62a5b25c633966658f1f2598615f9c3f27f36cf45055d2a74b401"},
- {file = "scikit_image-0.17.2-cp38-cp38-win32.whl", hash = "sha256:7bedd3881ca4fea657a894815bcd5e5bf80944c26274f6b6417bb770c3f4f8e6"},
- {file = "scikit_image-0.17.2-cp38-cp38-win_amd64.whl", hash = "sha256:113bcacdfc839854f527a166a71768708328208e7b66e491050d6a57fa6727c7"},
-]
-scipy = [
- {file = "scipy-1.5.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:4f12d13ffbc16e988fa40809cbbd7a8b45bc05ff6ea0ba8e3e41f6f4db3a9e47"},
- {file = "scipy-1.5.4-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:a254b98dbcc744c723a838c03b74a8a34c0558c9ac5c86d5561703362231107d"},
- {file = "scipy-1.5.4-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:368c0f69f93186309e1b4beb8e26d51dd6f5010b79264c0f1e9ca00cd92ea8c9"},
- {file = "scipy-1.5.4-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:4598cf03136067000855d6b44d7a1f4f46994164bcd450fb2c3d481afc25dd06"},
- {file = "scipy-1.5.4-cp36-cp36m-win32.whl", hash = "sha256:e98d49a5717369d8241d6cf33ecb0ca72deee392414118198a8e5b4c35c56340"},
- {file = "scipy-1.5.4-cp36-cp36m-win_amd64.whl", hash = "sha256:65923bc3809524e46fb7eb4d6346552cbb6a1ffc41be748535aa502a2e3d3389"},
- {file = "scipy-1.5.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:9ad4fcddcbf5dc67619379782e6aeef41218a79e17979aaed01ed099876c0e62"},
- {file = "scipy-1.5.4-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:f87b39f4d69cf7d7529d7b1098cb712033b17ea7714aed831b95628f483fd012"},
- {file = "scipy-1.5.4-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:25b241034215247481f53355e05f9e25462682b13bd9191359075682adcd9554"},
- {file = "scipy-1.5.4-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:fa789583fc94a7689b45834453fec095245c7e69c58561dc159b5d5277057e4c"},
- {file = "scipy-1.5.4-cp37-cp37m-win32.whl", hash = "sha256:d6d25c41a009e3c6b7e757338948d0076ee1dd1770d1c09ec131f11946883c54"},
- {file = "scipy-1.5.4-cp37-cp37m-win_amd64.whl", hash = "sha256:2c872de0c69ed20fb1a9b9cf6f77298b04a26f0b8720a5457be08be254366c6e"},
- {file = "scipy-1.5.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e360cb2299028d0b0d0f65a5c5e51fc16a335f1603aa2357c25766c8dab56938"},
- {file = "scipy-1.5.4-cp38-cp38-manylinux1_i686.whl", hash = "sha256:3397c129b479846d7eaa18f999369a24322d008fac0782e7828fa567358c36ce"},
- {file = "scipy-1.5.4-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:168c45c0c32e23f613db7c9e4e780bc61982d71dcd406ead746c7c7c2f2004ce"},
- {file = "scipy-1.5.4-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:213bc59191da2f479984ad4ec39406bf949a99aba70e9237b916ce7547b6ef42"},
- {file = "scipy-1.5.4-cp38-cp38-win32.whl", hash = "sha256:634568a3018bc16a83cda28d4f7aed0d803dd5618facb36e977e53b2df868443"},
- {file = "scipy-1.5.4-cp38-cp38-win_amd64.whl", hash = "sha256:b03c4338d6d3d299e8ca494194c0ae4f611548da59e3c038813f1a43976cb437"},
- {file = "scipy-1.5.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3d5db5d815370c28d938cf9b0809dade4acf7aba57eaf7ef733bfedc9b2474c4"},
- {file = "scipy-1.5.4-cp39-cp39-manylinux1_i686.whl", hash = "sha256:6b0ceb23560f46dd236a8ad4378fc40bad1783e997604ba845e131d6c680963e"},
- {file = "scipy-1.5.4-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:ed572470af2438b526ea574ff8f05e7f39b44ac37f712105e57fc4d53a6fb660"},
- {file = "scipy-1.5.4-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:8c8d6ca19c8497344b810b0b0344f8375af5f6bb9c98bd42e33f747417ab3f57"},
- {file = "scipy-1.5.4-cp39-cp39-win32.whl", hash = "sha256:d84cadd7d7998433334c99fa55bcba0d8b4aeff0edb123b2a1dfcface538e474"},
- {file = "scipy-1.5.4-cp39-cp39-win_amd64.whl", hash = "sha256:cc1f78ebc982cd0602c9a7615d878396bec94908db67d4ecddca864d049112f2"},
- {file = "scipy-1.5.4.tar.gz", hash = "sha256:4a453d5e5689de62e5d38edf40af3f17560bfd63c9c5bd228c18c1f99afa155b"},
-]
-shapely = [
- {file = "Shapely-1.7.1-1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:46da0ea527da9cf9503e66c18bab6981c5556859e518fe71578b47126e54ca93"},
- {file = "Shapely-1.7.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:4c10f317e379cc404f8fc510cd9982d5d3e7ba13a9cfd39aa251d894c6366798"},
- {file = "Shapely-1.7.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:17df66e87d0fe0193910aeaa938c99f0b04f67b430edb8adae01e7be557b141b"},
- {file = "Shapely-1.7.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:da38ed3d65b8091447dc3717e5218cc336d20303b77b0634b261bc5c1aa2bae8"},
- {file = "Shapely-1.7.1-cp35-cp35m-win32.whl", hash = "sha256:8e7659dd994792a0aad8fb80439f59055a21163e236faf2f9823beb63a380e19"},
- {file = "Shapely-1.7.1-cp35-cp35m-win_amd64.whl", hash = "sha256:791477edb422692e7dc351c5ed6530eb0e949a31b45569946619a0d9cd5f53cb"},
- {file = "Shapely-1.7.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e3afccf0437edc108eef1e2bb9cc4c7073e7705924eb4cd0bf7715cd1ef0ce1b"},
- {file = "Shapely-1.7.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8f15b6ce67dcc05b61f19c689b60f3fe58550ba994290ff8332f711f5aaa9840"},
- {file = "Shapely-1.7.1-cp36-cp36m-win32.whl", hash = "sha256:60e5b2282619249dbe8dc5266d781cc7d7fb1b27fa49f8241f2167672ad26719"},
- {file = "Shapely-1.7.1-cp36-cp36m-win_amd64.whl", hash = "sha256:de618e67b64a51a0768d26a9963ecd7d338a2cf6e9e7582d2385f88ad005b3d1"},
- {file = "Shapely-1.7.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:182716ffb500d114b5d1b75d7fd9d14b7d3414cef3c38c0490534cc9ce20981a"},
- {file = "Shapely-1.7.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:4f3c59f6dbf86a9fc293546de492f5e07344e045f9333f3a753f2dda903c45d1"},
- {file = "Shapely-1.7.1-cp37-cp37m-win32.whl", hash = "sha256:6871acba8fbe744efa4f9f34e726d070bfbf9bffb356a8f6d64557846324232b"},
- {file = "Shapely-1.7.1-cp37-cp37m-win_amd64.whl", hash = "sha256:35be1c5d869966569d3dfd4ec31832d7c780e9df760e1fe52131105685941891"},
- {file = "Shapely-1.7.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:052eb5b9ba756808a7825e8a8020fb146ec489dd5c919e7d139014775411e688"},
- {file = "Shapely-1.7.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:90a3e2ae0d6d7d50ff2370ba168fbd416a53e7d8448410758c5d6a5920646c1d"},
- {file = "Shapely-1.7.1-cp38-cp38-win32.whl", hash = "sha256:a3774516c8a83abfd1ddffb8b6ec1b0935d7fe6ea0ff5c31a18bfdae567b4eba"},
- {file = "Shapely-1.7.1-cp38-cp38-win_amd64.whl", hash = "sha256:6593026cd3f5daaea12bcc51ae5c979318070fefee210e7990cb8ac2364e79a1"},
- {file = "Shapely-1.7.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:617bf046a6861d7c6b44d2d9cb9e2311548638e684c2cd071d8945f24a926263"},
- {file = "Shapely-1.7.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:b40cc7bb089ae4aa9ddba1db900b4cd1bce3925d2a4b5837b639e49de054784f"},
- {file = "Shapely-1.7.1-cp39-cp39-win32.whl", hash = "sha256:2df5260d0f2983309776cb41bfa85c464ec07018d88c0ecfca23d40bfadae2f1"},
- {file = "Shapely-1.7.1-cp39-cp39-win_amd64.whl", hash = "sha256:a5c3a50d823c192f32615a2a6920e8c046b09e07a58eba220407335a9cd2e8ea"},
- {file = "Shapely-1.7.1.tar.gz", hash = "sha256:1641724c1055459a7e2b8bbe47ba25bdc89554582e62aec23cb3f3ca25f9b129"},
-]
-six = [
- {file = "six-1.15.0-py2.py3-none-any.whl", hash = "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced"},
- {file = "six-1.15.0.tar.gz", hash = "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259"},
-]
-tensorboard = [
- {file = "tensorboard-2.5.0-py3-none-any.whl", hash = "sha256:e167460085b6528956b33bab1c970c989cdce47a6616273880733f5e7bde452e"},
-]
-tensorboard-data-server = [
- {file = "tensorboard_data_server-0.6.0-py3-none-any.whl", hash = "sha256:a4b8e1c3fc85237b3afeef450db06c9a9b25f5854ad27c21667a90808acd1822"},
- {file = "tensorboard_data_server-0.6.0-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:2d723d73e3a3b0a4498f56c64c39e2e26ac192414891df22c9f152b7058fd6bc"},
- {file = "tensorboard_data_server-0.6.0-py3-none-manylinux2010_x86_64.whl", hash = "sha256:b620e520d3d535ceb896557acca0029fd7fd2f9f408af35abc2d2dad91f0345d"},
-]
-tensorboard-plugin-wit = [
- {file = "tensorboard_plugin_wit-1.8.0-py3-none-any.whl", hash = "sha256:2a80d1c551d741e99b2f197bb915d8a133e24adb8da1732b840041860f91183a"},
-]
-terminaltables = [
- {file = "terminaltables-3.1.0.tar.gz", hash = "sha256:f3eb0eb92e3833972ac36796293ca0906e998dc3be91fbe1f8615b331b853b81"},
-]
-tifffile = [
- {file = "tifffile-2020.9.3-py3-none-any.whl", hash = "sha256:e7c03c5827def91bec6e353e728f4bd02f35f08b142cd520f66b21f31ff4402b"},
- {file = "tifffile-2020.9.3.tar.gz", hash = "sha256:5b5f079d61c473795d71aca4e91068811fbb43f6f115e3ef9e77f079c23b17c4"},
-]
-torch = [
- {file = "torch-1.7.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:422e64e98d0e100c360993819d0307e5d56e9517b26135808ad68984d577d75a"},
- {file = "torch-1.7.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f0aaf657145533824b15f2fd8fde8f8c67fe6c6281088ef588091f03fad90243"},
- {file = "torch-1.7.1-cp36-none-macosx_10_9_x86_64.whl", hash = "sha256:af464a6f4314a875035e0c4c2b07517599704b214634f4ed3ad2e748c5ef291f"},
- {file = "torch-1.7.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5d76c255a41484c1d41a9ff570b9c9f36cb85df9428aa15a58ae16ac7cfc2ea6"},
- {file = "torch-1.7.1-cp37-cp37m-win_amd64.whl", hash = "sha256:d241c3f1c4d563e4ba86f84769c23e12606db167ee6f674eedff6d02901462e3"},
- {file = "torch-1.7.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:de84b4166e3f7335eb868b51d3bbd909ec33828af27290b4171bce832a55be3c"},
- {file = "torch-1.7.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:dd2fc6880c95e836960d86efbbc7f63d3287f2e1893c51d31f96dbfe02f0d73e"},
- {file = "torch-1.7.1-cp38-cp38-win_amd64.whl", hash = "sha256:e000b94be3aa58ad7f61e7d07cf379ea9366cf6c6874e68bd58ad0bdc537b3a7"},
- {file = "torch-1.7.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:2e49cac969976be63117004ee00d0a3e3dd4ea662ad77383f671b8992825de1a"},
- {file = "torch-1.7.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:a3793dcceb12b1e2281290cca1277c5ce86ddfd5bf044f654285a4d69057aea7"},
- {file = "torch-1.7.1-cp39-cp39-win_amd64.whl", hash = "sha256:6652a767a0572ae0feb74ad128758e507afd3b8396b6e7f147e438ba8d4c6f63"},
- {file = "torch-1.7.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:38d67f4fb189a92a977b2c0a38e4f6dd413e0bf55aa6d40004696df7e40a71ff"},
-]
-torchsummary = [
- {file = "torchsummary-1.5.1-py3-none-any.whl", hash = "sha256:10f41d1743fb918f83293f13183f532ab1bb8f6639a1b89e5f8592ec1919a976"},
- {file = "torchsummary-1.5.1.tar.gz", hash = "sha256:981bf689e22e0cf7f95c746002f20a24ad26aa6b9d861134a14bc6ce92230590"},
-]
-torchvision = [
- {file = "torchvision-0.8.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:86fae370d222f76ad57c57c3bee03f78b8db727743bfb4c1559a3d395159cea8"},
- {file = "torchvision-0.8.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:951239b5fcb911dbf78c1385d677f5f48c7a1b12859e3d3ec287562821b17cf2"},
- {file = "torchvision-0.8.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:24db8f4c3d812a032273f68563ad5dbd724f5bfbed523d0c6dce8cede26bb153"},
- {file = "torchvision-0.8.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:b068f6bcbe91bdd34dda0a39e8a26392add45a3be82543f6dd523b76484fb56f"},
- {file = "torchvision-0.8.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:afb76a66b9b0693f758a881a2bf333ed97e3c0c3f15a413c4f49d8dd8bd21307"},
- {file = "torchvision-0.8.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:cd8817e9197fc60ebae37162a445db90bbf35591314a5767ad3d1490b5d65b0f"},
- {file = "torchvision-0.8.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1bd58acc3366ec02266aae56a7a752d43ef07de4a6ba420c4f907d0c9168bb8c"},
- {file = "torchvision-0.8.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:976750a49db2e23dc5a1ed0b5c31f7af51ed2702eee410ee09ef985c3a3e48cf"},
-]
-tqdm = [
- {file = "tqdm-4.61.1-py2.py3-none-any.whl", hash = "sha256:aa0c29f03f298951ac6318f7c8ce584e48fa22ec26396e6411e43d038243bdb2"},
- {file = "tqdm-4.61.1.tar.gz", hash = "sha256:24be966933e942be5f074c29755a95b315c69a91f839a29139bf26ffffe2d3fd"},
-]
-typing-extensions = [
- {file = "typing_extensions-3.7.4.3-py2-none-any.whl", hash = "sha256:dafc7639cde7f1b6e1acc0f457842a83e722ccca8eef5270af2d74792619a89f"},
- {file = "typing_extensions-3.7.4.3-py3-none-any.whl", hash = "sha256:7cb407020f00f7bfc3cb3e7881628838e69d8f3fcab2f64742a5e76b2f841918"},
- {file = "typing_extensions-3.7.4.3.tar.gz", hash = "sha256:99d4073b617d30288f569d3f13d2bd7548c3a7e4c8de87db09a9d29bb3a4a60c"},
-]
-urllib3 = [
- {file = "urllib3-1.22-py2.py3-none-any.whl", hash = "sha256:06330f386d6e4b195fbfc736b297f58c5a892e4440e54d294d7004e3a9bbea1b"},
- {file = "urllib3-1.22.tar.gz", hash = "sha256:cc44da8e1145637334317feebd728bd869a35285b93cbb4cca2577da7e62db4f"},
-]
-werkzeug = [
- {file = "Werkzeug-1.0.1-py2.py3-none-any.whl", hash = "sha256:2de2a5db0baeae7b2d2664949077c2ac63fbd16d98da0ff71837f7d1dea3fd43"},
- {file = "Werkzeug-1.0.1.tar.gz", hash = "sha256:6c80b1e5ad3665290ea39320b91e1be1e0d5f60652b964a3070216de83d2e47c"},
-]
-zipp = [
- {file = "zipp-3.4.1-py3-none-any.whl", hash = "sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"},
- {file = "zipp-3.4.1.tar.gz", hash = "sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76"},
-]
diff --git a/cv/detection/yolov3/pytorch/pyproject.toml b/cv/detection/yolov3/pytorch/pyproject.toml
deleted file mode 100644
index 7e3a87f61a79e9d2eec8b89e63bc36e3d2315db7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pyproject.toml
+++ /dev/null
@@ -1,34 +0,0 @@
-[tool.poetry]
-name = "PyTorchYolo"
-version = "1.4.2"
-readme = "README.md"
-repository = "https://github.com/eriklindernoren/PyTorch-YOLOv3"
-description = "Minimal PyTorch implementation of YOLO"
-authors = ["Florian Vahl ", "Erik Linder-Noren "]
-license = "GPL-3.0"
-
-[tool.poetry.dependencies]
-python = ">=3.6.2"
-numpy = "^1.19.5"
-torch = ">=1.0"
-torchvision = "^0.8.2"
-matplotlib = "^3.3.3"
-tensorboard = "^2.4.0"
-terminaltables = "^3.1.0"
-Pillow = "^8.1.0"
-tqdm = "^4.55.1"
-imgaug = "^0.4.0"
-torchsummary = "^1.5.1"
-
-[tool.poetry.dev-dependencies]
-rope = "^0.19.0"
-profilehooks = "^1.12.0"
-
-[build-system]
-requires = ["poetry-core>=1.0.0"]
-build-backend = "poetry.core.masonry.api"
-
-[tool.poetry.scripts]
-yolo-detect = "pytorchyolo.detect:run"
-yolo-train = "pytorchyolo.train:run"
-yolo-test = "pytorchyolo.test:run"
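-
-# The entries in [tool.poetry.scripts] above are installed as console commands,
-# so, assuming a working poetry environment, running e.g.
-# `poetry run yolo-detect --images data/samples` should invoke
-# pytorchyolo.detect:run with its default cfg/weights paths.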
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/__init__.py b/cv/detection/yolov3/pytorch/pytorchyolo/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/detect.py b/cv/detection/yolov3/pytorch/pytorchyolo/detect.py
deleted file mode 100644
index ca59f4f40efa85ccaf56891e536ebf35c2401067..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/detect.py
+++ /dev/null
@@ -1,285 +0,0 @@
-#! /usr/bin/env python3
-
-from __future__ import division
-
-import os
-import argparse
-import tqdm
-import random
-import numpy as np
-
-from PIL import Image
-
-import torch
-import torchvision.transforms as transforms
-from torch.utils.data import DataLoader
-from torch.autograd import Variable
-
-from pytorchyolo.models import load_model
-from pytorchyolo.utils.utils import load_classes, rescale_boxes, non_max_suppression, print_environment_info
-from pytorchyolo.utils.datasets import ImageFolder
-from pytorchyolo.utils.transforms import Resize, DEFAULT_TRANSFORMS
-
-import matplotlib.pyplot as plt
-import matplotlib.patches as patches
-from matplotlib.ticker import NullLocator
-
-
-def detect_directory(model_path, weights_path, img_path, classes, output_path,
- batch_size=8, img_size=416, n_cpu=8, conf_thres=0.5, nms_thres=0.5):
-    """Detects objects on all images in the specified directory and saves output images with drawn detections.
-
- :param model_path: Path to model definition file (.cfg)
- :type model_path: str
- :param weights_path: Path to weights or checkpoint file (.weights or .pth)
- :type weights_path: str
-    :param img_path: Path to directory with images to run inference on
- :type img_path: str
- :param classes: List of class names
- :type classes: [str]
- :param output_path: Path to output directory
- :type output_path: str
- :param batch_size: Size of each image batch, defaults to 8
- :type batch_size: int, optional
- :param img_size: Size of each image dimension for yolo, defaults to 416
- :type img_size: int, optional
- :param n_cpu: Number of cpu threads to use during batch generation, defaults to 8
- :type n_cpu: int, optional
- :param conf_thres: Object confidence threshold, defaults to 0.5
- :type conf_thres: float, optional
- :param nms_thres: IOU threshold for non-maximum suppression, defaults to 0.5
- :type nms_thres: float, optional
- """
- dataloader = _create_data_loader(img_path, batch_size, img_size, n_cpu)
- model = load_model(model_path, weights_path)
- img_detections, imgs = detect(
- model,
- dataloader,
- output_path,
- img_size,
- conf_thres,
- nms_thres)
- _draw_and_save_output_images(
- img_detections, imgs, img_size, output_path, classes)
-
-
-def detect_image(model, image, img_size=416, conf_thres=0.5, nms_thres=0.5):
-    """Runs inference on a single image with the model.
-
- :param model: Model for inference
- :type model: models.Darknet
-    :param image: Image to run inference on
-    :type image: np.ndarray
- :param img_size: Size of each image dimension for yolo, defaults to 416
- :type img_size: int, optional
- :param conf_thres: Object confidence threshold, defaults to 0.5
- :type conf_thres: float, optional
- :param nms_thres: IOU threshold for non-maximum suppression, defaults to 0.5
- :type nms_thres: float, optional
- :return: Detections on image with each detection in the format: [x1, y1, x2, y2, confidence, class]
-    :rtype: np.ndarray
- """
- model.eval() # Set model to evaluation mode
-
- # Configure input
- input_img = transforms.Compose([
- DEFAULT_TRANSFORMS,
- Resize(img_size)])(
- (image, np.zeros((1, 5))))[0].unsqueeze(0)
-
- if torch.cuda.is_available():
- input_img = input_img.to("cuda")
-
- # Get detections
- with torch.no_grad():
- detections = model(input_img)
- detections = non_max_suppression(detections, conf_thres, nms_thres)
- detections = rescale_boxes(detections[0], img_size, image.shape[:2])
- return detections.numpy()
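-
-# A minimal usage sketch for detect_image (illustrative; assumes the default
-# cfg/weights paths used by run() below and a hypothetical sample image):
-#
-#   import numpy as np
-#   from PIL import Image
-#   from pytorchyolo.models import load_model
-#
-#   model = load_model("config/yolov3.cfg", "weights/yolov3.weights")
-#   img = np.array(Image.open("data/samples/dog.jpg").convert("RGB"))
-#   boxes = detect_image(model, img)  # rows of [x1, y1, x2, y2, conf, class]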
-
-
-def detect(model, dataloader, output_path, img_size, conf_thres, nms_thres):
-    """Runs inference on all images provided by the dataloader.
-
- :param model: Model for inference
- :type model: models.Darknet
- :param dataloader: Dataloader provides the batches of images to inference
- :type dataloader: DataLoader
- :param output_path: Path to output directory
- :type output_path: str
- :param img_size: Size of each image dimension for yolo, defaults to 416
- :type img_size: int, optional
- :param conf_thres: Object confidence threshold, defaults to 0.5
- :type conf_thres: float, optional
- :param nms_thres: IOU threshold for non-maximum suppression, defaults to 0.5
- :type nms_thres: float, optional
- :return: List of detections. The coordinates are given for the padded image that is provided by the dataloader.
-        Use `utils.rescale_boxes` to transform them back into the coordinate system of the input image (i.e. as it was before being transformed by the dataloader),
- List of input image paths
- :rtype: [Tensor], [str]
- """
- # Create output directory, if missing
- os.makedirs(output_path, exist_ok=True)
-
- model.eval() # Set model to evaluation mode
-
- Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
-
- img_detections = [] # Stores detections for each image index
- imgs = [] # Stores image paths
-
- for (img_paths, input_imgs) in tqdm.tqdm(dataloader, desc="Detecting"):
- # Configure input
- input_imgs = Variable(input_imgs.type(Tensor))
-
- # Get detections
- with torch.no_grad():
- detections = model(input_imgs)
- detections = non_max_suppression(detections, conf_thres, nms_thres)
-
- # Store image and detections
- img_detections.extend(detections)
- imgs.extend(img_paths)
- return img_detections, imgs
-
-
-def _draw_and_save_output_images(img_detections, imgs, img_size, output_path, classes):
- """Draws detections in output images and stores them.
-
- :param img_detections: List of detections
- :type img_detections: [Tensor]
- :param imgs: List of paths to image files
- :type imgs: [str]
- :param img_size: Size of each image dimension for yolo
- :type img_size: int
- :param output_path: Path of output directory
- :type output_path: str
- :param classes: List of class names
- :type classes: [str]
- """
-
- # Iterate through images and save plot of detections
- for (image_path, detections) in zip(imgs, img_detections):
- print(f"Image {image_path}:")
- _draw_and_save_output_image(
- image_path, detections, img_size, output_path, classes)
-
-
-def _draw_and_save_output_image(image_path, detections, img_size, output_path, classes):
- """Draws detections in output image and stores this.
-
- :param image_path: Path to input image
- :type image_path: str
- :param detections: List of detections on image
- :type detections: [Tensor]
- :param img_size: Size of each image dimension for yolo
- :type img_size: int
- :param output_path: Path of output directory
- :type output_path: str
- :param classes: List of class names
- :type classes: [str]
- """
- # Create plot
- img = np.array(Image.open(image_path))
- fig, ax = plt.subplots(1)
- ax.imshow(img)
- # Rescale boxes to original image
- detections = rescale_boxes(detections, img_size, img.shape[:2])
- unique_labels = detections[:, -1].cpu().unique()
- n_cls_preds = len(unique_labels)
- # Bounding-box colors
- cmap = plt.get_cmap("tab20b")
- colors = [cmap(i) for i in np.linspace(0, 1, n_cls_preds)]
- bbox_colors = random.sample(colors, n_cls_preds)
- for x1, y1, x2, y2, conf, cls_pred in detections:
-
- print(f"\t+ Label: {classes[int(cls_pred)]} | Confidence: {conf.item():0.4f}")
-
- box_w = x2 - x1
- box_h = y2 - y1
-
- color = bbox_colors[int(np.where(unique_labels == int(cls_pred))[0])]
- # Create a Rectangle patch
- bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=2, edgecolor=color, facecolor="none")
- # Add the bbox to the plot
- ax.add_patch(bbox)
- # Add label
- plt.text(
- x1,
- y1,
- s=classes[int(cls_pred)],
- color="white",
- verticalalignment="top",
- bbox={"color": color, "pad": 0})
-
- # Save generated image with detections
- plt.axis("off")
- plt.gca().xaxis.set_major_locator(NullLocator())
- plt.gca().yaxis.set_major_locator(NullLocator())
- filename = os.path.basename(image_path).split(".")[0]
- output_path = os.path.join(output_path, f"{filename}.png")
- plt.savefig(output_path, bbox_inches="tight", pad_inches=0.0)
- plt.close()
-
-
-def _create_data_loader(img_path, batch_size, img_size, n_cpu):
-    """Creates a DataLoader for inference.
-
-    :param img_path: Path to directory with images to run inference on.
- :type img_path: str
- :param batch_size: Size of each image batch
- :type batch_size: int
- :param img_size: Size of each image dimension for yolo
- :type img_size: int
- :param n_cpu: Number of cpu threads to use during batch generation
- :type n_cpu: int
- :return: Returns DataLoader
- :rtype: DataLoader
- """
- dataset = ImageFolder(
- img_path,
- transform=transforms.Compose([DEFAULT_TRANSFORMS, Resize(img_size)]))
- dataloader = DataLoader(
- dataset,
- batch_size=batch_size,
- shuffle=False,
- num_workers=n_cpu,
- pin_memory=True)
- return dataloader
-
-
-def run():
- print_environment_info()
- parser = argparse.ArgumentParser(description="Detect objects on images.")
- parser.add_argument("-m", "--model", type=str, default="config/yolov3.cfg", help="Path to model definition file (.cfg)")
- parser.add_argument("-w", "--weights", type=str, default="weights/yolov3.weights", help="Path to weights or checkpoint file (.weights or .pth)")
-    parser.add_argument("-i", "--images", type=str, default="data/samples", help="Path to directory with images to run inference on")
- parser.add_argument("-c", "--classes", type=str, default="data/coco.names", help="Path to classes label file (.names)")
- parser.add_argument("-o", "--output", type=str, default="output", help="Path to output directory")
- parser.add_argument("-b", "--batch_size", type=int, default=1, help="Size of each image batch")
- parser.add_argument("--img_size", type=int, default=416, help="Size of each image dimension for yolo")
- parser.add_argument("--n_cpu", type=int, default=8, help="Number of cpu threads to use during batch generation")
- parser.add_argument("--conf_thres", type=float, default=0.5, help="Object confidence threshold")
- parser.add_argument("--nms_thres", type=float, default=0.4, help="IOU threshold for non-maximum suppression")
- args = parser.parse_args()
- print(f"Command line arguments: {args}")
-
- # Extract class names from file
- classes = load_classes(args.classes) # List of class names
-
- detect_directory(
- args.model,
- args.weights,
- args.images,
- classes,
- args.output,
- batch_size=args.batch_size,
- img_size=args.img_size,
- n_cpu=args.n_cpu,
- conf_thres=args.conf_thres,
- nms_thres=args.nms_thres)
-
-
-if __name__ == '__main__':
- run()
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/finetune.py b/cv/detection/yolov3/pytorch/pytorchyolo/finetune.py
deleted file mode 100644
index bf37afcfa06d65fb7066a0839f7e83325e89a76f..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/finetune.py
+++ /dev/null
@@ -1,602 +0,0 @@
-#! /usr/bin/env python3
-
-from __future__ import division
-
-import os
-import argparse
-import tqdm
-import time
-import datetime
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), '../../../..'))
-
-import torch
-
-try:
- from torch.utils.tensorboard import SummaryWriter
-except ImportError:
- class SummaryWriter(object):
- def __init__(self, log_dir=None, comment='', purge_step=None, max_queue=10,
- flush_secs=120, filename_suffix=''):
- if not log_dir:
- import socket
- from datetime import datetime
- current_time = datetime.now().strftime('%b%d_%H-%M-%S')
- log_dir = os.path.join(
- 'runs', current_time + '_' + socket.gethostname() + comment)
- self.log_dir = log_dir
- self.purge_step = purge_step
- self.max_queue = max_queue
- self.flush_secs = flush_secs
- self.filename_suffix = filename_suffix
-
- # Initialize the file writers, but they can be cleared out on close
- # and recreated later as needed.
- self.file_writer = self.all_writers = None
- self._get_file_writer()
-
- # Create default bins for histograms, see generate_testdata.py in tensorflow/tensorboard
- v = 1E-12
- buckets = []
- neg_buckets = []
- while v < 1E20:
- buckets.append(v)
- neg_buckets.append(-v)
- v *= 1.1
- self.default_bins = neg_buckets[::-1] + [0] + buckets
-
- def _check_caffe2_blob(self, item): pass
-
- def _get_file_writer(self): pass
-
- def get_logdir(self):
- """Returns the directory where event files will be written."""
- return self.log_dir
-
- def add_hparams(self, hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None): pass
-
- def add_scalar(self, tag, scalar_value, global_step=None, walltime=None, new_style=False): pass
-
- def add_scalars(self, main_tag, tag_scalar_dict, global_step=None, walltime=None): pass
-
- def add_histogram(self, tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None): pass
-
- def add_histogram_raw(self, tag, min, max, num, sum, sum_squares, bucket_limits, bucket_counts, global_step=None, walltime=None): pass
-
- def add_image(self, tag, img_tensor, global_step=None, walltime=None, dataformats='CHW'): pass
-
- def add_images(self, tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW'): pass
-
- def add_image_with_boxes(self, tag, img_tensor, box_tensor, global_step=None, walltime=None, rescale=1, dataformats='CHW', labels=None): pass
-
- def add_figure(self, tag, figure, global_step=None, close=True, walltime=None): pass
-
- def add_video(self, tag, vid_tensor, global_step=None, fps=4, walltime=None): pass
-
- def add_audio(self, tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None): pass
-
- def add_text(self, tag, text_string, global_step=None, walltime=None): pass
-
- def add_onnx_graph(self, prototxt): pass
-
- def add_graph(self, model, input_to_model=None, verbose=False): pass
-
- @staticmethod
- def _encode(rawstr): pass
-
- def add_embedding(self, mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None): pass
-
- def add_pr_curve(self, tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_pr_curve_raw(self, tag, true_positive_counts, false_positive_counts, true_negative_counts, false_negative_counts, precision, recall, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_custom_scalars_multilinechart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars_marginchart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars(self, layout): pass
-
- def add_mesh(self, tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None): pass
-
- def flush(self): pass
-
- def close(self): pass
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.close()
-
-
-from torch.utils.data import DataLoader, DistributedSampler
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-import torch.optim as optim
-from torch.cuda import amp
-
-from pytorchyolo.models import load_model
-from pytorchyolo.utils.logger import Logger
-from pytorchyolo.utils.utils import to_cpu, load_classes, print_environment_info, worker_seed_set
-from pytorchyolo.utils.datasets import ListDataset
-from pytorchyolo.utils.augmentations import AUGMENTATION_TRANSFORMS
-# from pytorchyolo.utils.transforms import DEFAULT_TRANSFORMS
-from pytorchyolo.utils.parse_config import parse_data_config
-from pytorchyolo.utils.loss import compute_loss
-from pytorchyolo.test import _evaluate, _create_validation_data_loader
-
-from terminaltables import AsciiTable
-
-from torchsummary import summary
-from common_utils import init_distributed_mode
-
-
-
-train_names = ["module_list.81.conv_81.weight", "module_list.81.conv_81.bias",
- "module_list.93.conv_93.weight", "module_list.93.conv_93.bias",
- "module_list.105.conv_105.weight", "module_list.105.conv_105.bias"]
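-# In the stock yolov3 cfg these are the 1x1 convolutions directly in front of
-# the three YOLO detection layers (modules 81, 93 and 105); the freezing loop
-# in run() below leaves only these trainable during finetuning.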
-
-def _create_data_loader(img_path, batch_size, img_size, n_cpu, multiscale_training=False, distributed=False):
- """Creates a DataLoader for training.
-
- :param img_path: Path to file containing all paths to training images.
- :type img_path: str
- :param batch_size: Size of each image batch
- :type batch_size: int
- :param img_size: Size of each image dimension for yolo
- :type img_size: int
- :param n_cpu: Number of cpu threads to use during batch generation
- :type n_cpu: int
- :param multiscale_training: Scale images to different sizes randomly
- :type multiscale_training: bool
- :return: Returns DataLoader
- :rtype: DataLoader
- """
- dataset = ListDataset(
- img_path,
- img_size=img_size,
- multiscale=multiscale_training,
- transform=AUGMENTATION_TRANSFORMS)
- sampler = None
- shuffle = True
- if distributed:
- sampler = DistributedSampler(dataset, rank=dist.get_rank(), shuffle=True)
- shuffle = False
- dataloader = DataLoader(
- dataset,
- batch_size=batch_size,
- shuffle=shuffle,
- num_workers=n_cpu,
- pin_memory=True,
- collate_fn=dataset.collate_fn,
- worker_init_fn=worker_seed_set,
- sampler=sampler
- )
- return dataloader
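-
-# Note: with `distributed` set, DistributedSampler only reshuffles when
-# sampler.set_epoch(epoch) is called at the start of each epoch; run() below
-# never calls it, so every epoch sees the same shard order.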
-
-
-def run():
- print_environment_info()
- start_time = time.time()
- parser = argparse.ArgumentParser(description="Trains the YOLO model.")
- parser.add_argument("-m", "--model", type=str, default="config/yolov3-voc.cfg", help="Path to model definition file (.cfg)")
- parser.add_argument("-d", "--data", type=str, default="config/voc.data", help="Path to data config file (.data)")
- parser.add_argument("-e", "--epochs", type=int, default=10, help="Number of epochs")
- parser.add_argument("-v", "--verbose", action='store_true', help="Makes the training more verbose")
- parser.add_argument("--n_cpu", type=int, default=0, help="Number of cpu threads to use during batch generation")
- parser.add_argument("--pretrained_weights", type=str, help="Path to checkpoint file (.weights or .pth). Starts training from checkpoint model")
- parser.add_argument("--checkpoint_interval", type=int, default=1, help="Interval of epochs between saving model weights")
- parser.add_argument("--evaluation_interval", type=int, default=1, help="Interval of epochs between evaluations on validation set")
-    parser.add_argument("--multiscale_training", action="store_false", help="Disable multi-scale training (it is enabled by default)")
- parser.add_argument("--iou_thres", type=float, default=0.5, help="Evaluation: IOU threshold required to qualify as detected")
- parser.add_argument("--conf_thres", type=float, default=0.01, help="Evaluation: Object confidence threshold")
- parser.add_argument("--nms_thres", type=float, default=0.4, help="Evaluation: IOU threshold for non-maximum suppression")
- parser.add_argument("--logdir", type=str, default="logs", help="Directory for training log files (e.g. for TensorBoard)")
-    parser.add_argument("--second_stage_steps", type=int, default=10, help="Number of second stage training steps (unfreeze all params)")
-
- # distributed training parameters
- parser.add_argument('--local_rank', default=-1, type=int,
- help='Local rank')
- parser.add_argument('--world-size', default=1, type=int,
- help='number of distributed processes')
- parser.add_argument('--dist-url', default='env://', help='url used to set up distributed training')
-
- parser.add_argument("--dist_backend", type=str, default="gloo", help="Distributed training backend.")
-
- parser.add_argument('--amp', action='store_true', default=False, help='use amp to train and test')
- args = parser.parse_args()
-
- args.rank = -1
- init_distributed_mode(args)
- rank = args.rank
-
- print(f"Command line arguments: {args}")
-
- logger = Logger(args.logdir) # Tensorboard logger
-
- # Create output directories if missing
- os.makedirs("output", exist_ok=True)
- os.makedirs("checkpoints", exist_ok=True)
-
- # enable cudnn autotune
- torch.backends.cudnn.benchmark = True
-
- # Get data configuration
- data_config = parse_data_config(args.data)
- train_path = data_config["train"]
- valid_path = data_config["valid"]
- class_names = load_classes(data_config["names"])
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
- # ############
- # Create model
- # ############
-
- model = load_model(args.model, args.pretrained_weights)
- model_module = model
- if args.distributed:
- model = model.to(rank)
- model = DDP(model, device_ids=[args.rank], find_unused_parameters=True)
- model_module = model.module
-
- # Print model
- if args.verbose:
- summary(model_module, input_size=(3, model_module.hyperparams['height'], model_module.hyperparams['height']))
-
- mini_batch_size = model_module.hyperparams['batch'] // model_module.hyperparams['subdivisions']
-
- if dist.is_initialized():
- if dist.get_world_size() >= 8:
- _origin_bs = mini_batch_size
- mini_batch_size = mini_batch_size // 4
- mini_batch_size = max(4, mini_batch_size)
-            print(f"WARN: Updating per-process batch size from {_origin_bs} to {mini_batch_size} to avoid non-convergence when training on a small dataset.")
-
- # #################
- # Create Dataloader
- # #################
-
- # Load training dataloader
- dataloader = _create_data_loader(
- train_path,
- mini_batch_size,
- model_module.hyperparams['height'],
- args.n_cpu,
- args.multiscale_training,
- distributed=args.distributed
- )
-
- # Load validation dataloader
- validation_dataloader = _create_validation_data_loader(
- valid_path,
- mini_batch_size,
- model_module.hyperparams['height'],
- args.n_cpu
- )
-
- # ################
- # Create optimizer
- # ################
-
- params = [p for p in model.parameters() if p.requires_grad]
- print("===== Print trainable parameters =====")
-    print("Number of parameter tensors: {}".format(len(list(model.parameters()))))  # 222
-    # Nothing is frozen yet, so the loop below should print nothing
- for name, param in model.named_parameters():
- if not param.requires_grad:
- print(name, param.data.shape)
-
- # Freeze backbone network params
- other_names = []
- for name, param in model.named_parameters():
- if rank != -1 and name.startswith('module.'):
- # DDP
- name = name[len('module.'):]
- if name in train_names:
- print(name, param.data.shape)
- else:
- param.requires_grad = False
- other_names.append(name)
- params = [p for p in model.parameters() if p.requires_grad]
-
- if (model_module.hyperparams['optimizer'] in [None, "adam"]):
- optimizer = optim.Adam(
- params,
- lr=model_module.hyperparams['learning_rate'],
- weight_decay=model_module.hyperparams['decay'],
- )
- elif (model_module.hyperparams['optimizer'] == "sgd"):
- optimizer = optim.SGD(
- params,
- lr=model_module.hyperparams['learning_rate'],
- weight_decay=model_module.hyperparams['decay'],
- momentum=model_module.hyperparams['momentum'])
-    else:
-        raise ValueError("Unknown optimizer. Please choose between (adam, sgd).")
-
- scaler = amp.GradScaler()
-
- checkpoint_path = None
- # First stage training
- for epoch in range(args.epochs):
-
- print("\n---- Finetuning Model ----")
- epoch_start_time = time.time()
-
- model.train() # Set model to training mode
-
- for batch_i, (img_paths, imgs, targets) in enumerate(tqdm.tqdm(dataloader, desc=f"Training Epoch {epoch}")):
- batches_done = len(dataloader) * epoch + batch_i
-
- imgs = imgs.to(device, non_blocking=True)
- targets = targets.to(device)
-
- # print("len of img_paths = {}".format(len(img_paths)))
- # print(img_paths)
- # print(len(targets))
- # for target in targets:
- # print("{}\t|\t{}".format(target, target.shape))
- # print(1/0)
- if args.amp:
- with amp.autocast():
- outputs = model(imgs)
- loss, loss_components = compute_loss(outputs, targets, model_module)
- scaler.scale(loss).backward()
- else:
- outputs = model(imgs)
-
- loss, loss_components = compute_loss(outputs, targets, model_module)
-
- loss.backward()
-
- ###############
- # Run optimizer
- ###############
-
- if batches_done % model_module.hyperparams['subdivisions'] == 0:
- # Adapt learning rate
- # Get learning rate defined in cfg
- lr = model_module.hyperparams['learning_rate']
- if batches_done < model_module.hyperparams['burn_in']:
- # Burn in
- lr *= (batches_done / model_module.hyperparams['burn_in'])
- else:
- # Set and parse the learning rate to the steps defined in the cfg
- for threshold, value in model_module.hyperparams['lr_steps']:
- if batches_done > threshold:
- lr *= value
- # Log the learning rate
- if rank in [-1, 0]:
- logger.scalar_summary("train/learning_rate", lr, batches_done)
- # Set learning rate
- for g in optimizer.param_groups:
- g['lr'] = lr
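-                # Worked example (numbers are illustrative; the real values
-                # come from the .cfg): with learning_rate=1e-3 and burn_in=1000,
-                # batches_done=500 gives lr = 1e-3 * 500/1000 = 5e-4; with
-                # lr_steps=[(400000, 0.1), (450000, 0.1)], any batches_done
-                # beyond 450000 gives lr = 1e-3 * 0.1 * 0.1 = 1e-5.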
-
- # Run optimizer
- if args.amp:
- scaler.unscale_(optimizer)
- torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.)
- scaler.step(optimizer)
- scaler.update()
- else:
- optimizer.step()
- # Reset gradients
- optimizer.zero_grad()
-
- # ############
- # Log progress
- # ############
- if args.verbose and rank in [-1, 0]:
- print(AsciiTable(
- [
- ["Type", "Value"],
- ["IoU loss", float(loss_components[0])],
- ["Object loss", float(loss_components[1])],
- ["Class loss", float(loss_components[2])],
- ["Loss", float(loss_components[3])],
- ["Batch loss", to_cpu(loss).item()],
- ]).table)
-
- # Tensorboard logging
- if rank in [-1, 0]:
- tensorboard_log = [
- ("train/iou_loss", float(loss_components[0])),
- ("train/obj_loss", float(loss_components[1])),
- ("train/class_loss", float(loss_components[2])),
- ("train/loss", to_cpu(loss).item())]
- logger.list_of_scalars_summary(tensorboard_log, batches_done)
-
- model_module.seen += imgs.size(0)
-
- # #############
- # Save progress
- # #############
-
- # Save model to checkpoint file
- if (epoch % args.checkpoint_interval == 0) and (rank in [-1, 0]):
- checkpoint_path = f"checkpoints/yolov3_ckpt_{epoch}.pth"
- print(f"---- Saving checkpoint to: '{checkpoint_path}' ----")
- saved_state_dict = {}
- _state_dict = model.state_dict()
- for k in _state_dict.keys():
- new_k = k
- if k.startswith('module.'):
- new_k = k[len('module.'):]
- saved_state_dict[new_k] = _state_dict[k]
- torch.save(saved_state_dict, checkpoint_path)
- epoch_total_time = time.time() - epoch_start_time
- epoch_total_time_str = str(datetime.timedelta(seconds=int(epoch_total_time)))
-
- fps = len(dataloader) * mini_batch_size / epoch_total_time
- if dist.is_initialized():
- fps = fps * dist.get_world_size()
-
- print('epoch time {}, Total FPS: {}'.format(epoch_total_time_str, fps))
-
- # Unfreeze all params
- # if (checkpoint_path is not None) and (rank == -1):
- # print('Load checkpoint')
- # model = load_model(args.model, checkpoint_path) # Why do we need to restore?
- # for name, param in model.named_parameters():
- # param.requires_grad = True
- #
- # model_module = model
- # if args.distributed:
- # model = DDP(model, device_ids=[args.rank])
- # model_module = model.module
- #
- # print('Resume training')
- # # other_params = []
- # # for name, param in model.named_parameters():
- # # if name in other_names:
- # # param.requires_grad = True
- # # other_params.append(param)
- #
- # # optimizer.param_groups.append({'params': other_params})
- # # params = [p for p in model.parameters() if p.requires_grad]
- # # Reset optimizer
- # optimizer.zero_grad()
- # if torch.cuda.is_available():
- # model.module.cuda()
- # model.train()
- # model_module.train()
- # print(
- # 'model', type(model), '\n',
- # 'model module', type(model_module)
- # )
- # for name, param in model.named_parameters():
- # param.requires_grad = True
- # params = model.parameters()
- # if (model_module.hyperparams['optimizer'] in [None, "adam"]):
- # optimizer = optim.Adam(
- # params,
- # lr=lr,
- # weight_decay=model_module.hyperparams['decay'],
- # )
- # elif (model_module.hyperparams['optimizer'] == "sgd"):
- # optimizer = optim.SGD(
- # params,
- # lr=lr,
- # weight_decay=model_module.hyperparams['decay'],
- # momentum=model_module.hyperparams['momentum'])
- # else:
- # print("Unknown optimizer. Please choose between (adam, sgd).")
-
- # # Second stage training
- # epoch += 1
- # dist.barrier()
- # dataloader_iter = iter(dataloader)
- # for batch_i in tqdm.tqdm(range(args.second_stage_steps), desc=f"Training Epoch {epoch}"):
- # (img_paths, imgs, targets) = next(dataloader_iter)
- # # for batch_i, (img_paths, imgs, targets) in enumerate(tqdm.tqdm(dataloader, desc=f"Training Epoch {epoch}")):
- # batches_done = len(dataloader) * epoch + batch_i
- #
- # imgs = imgs.to(device, non_blocking=True)
- # targets = targets.to(device)
- #
- # outputs = model(imgs)
- #
- # loss, loss_components = compute_loss(outputs, targets, model_module)
- #
- # loss.backward()
- #
- # ###############
- # # Run optimizer
- # ###############
- #
- # if batches_done % model_module.hyperparams['subdivisions'] == 0:
- # # Adapt learning rate
- # # Get learning rate defined in cfg
- # lr = model_module.hyperparams['learning_rate']
- # if batches_done < model_module.hyperparams['burn_in']:
- # # Burn in
- # lr *= (batches_done / model_module.hyperparams['burn_in'])
- # else:
- # # Set and parse the learning rate to the steps defined in the cfg
- # for threshold, value in model_module.hyperparams['lr_steps']:
- # if batches_done > threshold:
- # lr *= value
- # # Log the learning rate
- # logger.scalar_summary("train/learning_rate", lr, batches_done)
- # # Set learning rate
- # for g in optimizer.param_groups:
- # g['lr'] = lr
- # g['lr'] = 3e-7
- # # Run optimizer
- # optimizer.step()
- # # Reset gradients
- # optimizer.zero_grad()
- #
- # # ############
- # # Log progress
- # # ############
- # if args.verbose:
- # print(AsciiTable(
- # [
- # ["Type", "Value"],
- # ["IoU loss", float(loss_components[0])],
- # ["Object loss", float(loss_components[1])],
- # ["Class loss", float(loss_components[2])],
- # ["Loss", float(loss_components[3])],
- # ["Batch loss", to_cpu(loss).item()],
- # ]).table)
- #
- # # Tensorboard logging
- # tensorboard_log = [
- # ("train/iou_loss", float(loss_components[0])),
- # ("train/obj_loss", float(loss_components[1])),
- # ("train/class_loss", float(loss_components[2])),
- # ("train/loss", to_cpu(loss).item())]
- # logger.list_of_scalars_summary(tensorboard_log, batches_done)
- #
- # model_module.seen += imgs.size(0)
-
- # #############
- # Save progress
- # #############
-
- # # Save model to checkpoint file
- # if epoch % args.checkpoint_interval == 0:
- # checkpoint_path = f"checkpoints/yolov3_ckpt_{epoch}.pth"
- # print(f"---- Saving checkpoint to: '{checkpoint_path}' ----")
- # torch.save(model_module.state_dict(), checkpoint_path)
-
- # ########
- # Evaluate
- # ########
-
- print("\n---- Evaluating Model ----")
- # Evaluate the model on the validation set
- metrics_output = _evaluate(
- model_module,
- validation_dataloader,
- class_names,
- img_size=model_module.hyperparams['height'],
- iou_thres=args.iou_thres,
- conf_thres=args.conf_thres,
- nms_thres=args.nms_thres,
- verbose=True
- )
-
- if (metrics_output is not None) and (rank in [-1, 0]):
- precision, recall, AP, f1, ap_class = metrics_output
- evaluation_metrics = [
- ("validation/precision", precision.mean()),
- ("validation/recall", recall.mean()),
- ("validation/mAP", AP.mean()),
- ("validation/f1", f1.mean())]
- logger.list_of_scalars_summary(evaluation_metrics, epoch)
- with open("train.logs", 'a') as f:
- f.write("epoch = {}\n".format(epoch))
- f.write("mAP = {}\n".format(AP.mean()))
- f.write("AP = \n")
- for elem in AP:
- f.write("{}\n".format(elem))
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
-if __name__ == "__main__":
- run()
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/models.py b/cv/detection/yolov3/pytorch/pytorchyolo/models.py
deleted file mode 100644
index 4413ffa061fd242daf4005deaa4033ea06bd0a9e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/models.py
+++ /dev/null
@@ -1,311 +0,0 @@
-from __future__ import division
-from itertools import chain
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import numpy as np
-
-from pytorchyolo.utils.parse_config import parse_model_config
-from pytorchyolo.utils.utils import weights_init_normal
-
-
-def create_modules(module_defs):
- """
- Constructs module list of layer blocks from module configuration in module_defs
- """
- hyperparams = module_defs.pop(0)
- hyperparams.update({
- 'batch': int(hyperparams['batch']),
- 'subdivisions': int(hyperparams['subdivisions']),
- 'width': int(hyperparams['width']),
- 'height': int(hyperparams['height']),
- 'channels': int(hyperparams['channels']),
- 'optimizer': hyperparams.get('optimizer'),
- 'momentum': float(hyperparams['momentum']),
- 'decay': float(hyperparams['decay']),
- 'learning_rate': float(hyperparams['learning_rate']),
- 'burn_in': int(hyperparams['burn_in']),
- 'max_batches': int(hyperparams['max_batches']),
- 'policy': hyperparams['policy'],
- 'lr_steps': list(zip(map(int, hyperparams["steps"].split(",")),
- map(float, hyperparams["scales"].split(","))))
- })
- assert hyperparams["height"] == hyperparams["width"], \
- "Height and width should be equal! Non square images are padded with zeros."
- output_filters = [hyperparams["channels"]]
- module_list = nn.ModuleList()
- for module_i, module_def in enumerate(module_defs):
- modules = nn.Sequential()
-
- if module_def["type"] == "convolutional":
- bn = int(module_def["batch_normalize"])
- filters = int(module_def["filters"])
- kernel_size = int(module_def["size"])
- pad = (kernel_size - 1) // 2
- modules.add_module(
- f"conv_{module_i}",
- nn.Conv2d(
- in_channels=output_filters[-1],
- out_channels=filters,
- kernel_size=kernel_size,
- stride=int(module_def["stride"]),
- padding=pad,
- bias=not bn,
- ),
- )
- if bn:
- modules.add_module(f"batch_norm_{module_i}",
- nn.BatchNorm2d(filters, momentum=0.9, eps=1e-5))
- if module_def["activation"] == "leaky":
- modules.add_module(f"leaky_{module_i}", nn.LeakyReLU(0.1))
- if module_def["activation"] == "mish":
- modules.add_module(f"mish_{module_i}", Mish())
-
- elif module_def["type"] == "maxpool":
- kernel_size = int(module_def["size"])
- stride = int(module_def["stride"])
- if kernel_size == 2 and stride == 1:
- modules.add_module(f"_debug_padding_{module_i}", nn.ZeroPad2d((0, 1, 0, 1)))
- maxpool = nn.MaxPool2d(kernel_size=kernel_size, stride=stride,
- padding=int((kernel_size - 1) // 2))
- modules.add_module(f"maxpool_{module_i}", maxpool)
-
- elif module_def["type"] == "upsample":
- upsample = Upsample(scale_factor=int(module_def["stride"]), mode="nearest")
- modules.add_module(f"upsample_{module_i}", upsample)
-
- elif module_def["type"] == "route":
- layers = [int(x) for x in module_def["layers"].split(",")]
- filters = sum([output_filters[1:][i] for i in layers]) // int(module_def.get("groups", 1))
- modules.add_module(f"route_{module_i}", nn.Sequential())
-
- elif module_def["type"] == "shortcut":
- filters = output_filters[1:][int(module_def["from"])]
- modules.add_module(f"shortcut_{module_i}", nn.Sequential())
-
- elif module_def["type"] == "yolo":
- anchor_idxs = [int(x) for x in module_def["mask"].split(",")]
- # Extract anchors
- anchors = [int(x) for x in module_def["anchors"].split(",")]
- anchors = [(anchors[i], anchors[i + 1]) for i in range(0, len(anchors), 2)]
- anchors = [anchors[i] for i in anchor_idxs]
- num_classes = int(module_def["classes"])
- # Define detection layer
- yolo_layer = YOLOLayer(anchors, num_classes)
- modules.add_module(f"yolo_{module_i}", yolo_layer)
- # Register module list and number of output filters
- module_list.append(modules)
- output_filters.append(filters)
-
- return hyperparams, module_list
-
-
-class Upsample(nn.Module):
- """ nn.Upsample is deprecated """
-
- def __init__(self, scale_factor, mode="nearest"):
- super(Upsample, self).__init__()
- self.scale_factor = scale_factor
- self.mode = mode
-
- def forward(self, x):
- x = F.interpolate(x, scale_factor=self.scale_factor, mode=self.mode)
- return x
-
-class Mish(nn.Module):
- """ The MISH activation function (https://github.com/digantamisra98/Mish) """
-
- def __init__(self):
- super(Mish, self).__init__()
-
- def forward(self, x):
- return x * torch.tanh(F.softplus(x))
-
-class YOLOLayer(nn.Module):
- """Detection layer"""
-
- def __init__(self, anchors, num_classes):
- super(YOLOLayer, self).__init__()
- self.num_anchors = len(anchors)
- self.num_classes = num_classes
- self.mse_loss = nn.MSELoss()
- self.bce_loss = nn.BCELoss()
- self.no = num_classes + 5 # number of outputs per anchor
- self.grid = torch.zeros(1) # TODO
-
- anchors = torch.tensor(list(chain(*anchors))).float().view(-1, 2)
- self.register_buffer('anchors', anchors)
- self.register_buffer(
- 'anchor_grid', anchors.clone().view(1, -1, 1, 1, 2))
- self.stride = None
-
- def forward(self, x, img_size):
- stride = img_size // x.size(2)
- self.stride = stride
- bs, _, ny, nx = x.shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x = x.view(bs, self.num_anchors, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid.shape[2:4] != x.shape[2:4]:
- self.grid = self._make_grid(nx, ny).to(x.device)
-
- x[..., 0:2] = (x[..., 0:2].sigmoid() + self.grid) * stride # xy
- x[..., 2:4] = torch.exp(x[..., 2:4]) * self.anchor_grid # wh
- x[..., 4:] = x[..., 4:].sigmoid()
- x = x.view(bs, -1, self.no)
-
- return x
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
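-
-    # Decode example (illustrative; 116x90 is the stock yolov3 anchor for the
-    # 13x13 head): with img_size=416 and a 13x13 feature map,
-    # stride = 416 // 13 = 32. A prediction at cell (6, 6) with
-    # tx = ty = tw = th = 0 decodes to center
-    # ((sigmoid(0) + 6) * 32, (sigmoid(0) + 6) * 32) = (208, 208) and size
-    # (exp(0) * 116, exp(0) * 90) = (116, 90) pixels.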
-
-
-class Darknet(nn.Module):
- """YOLOv3 object detection model"""
-
- def __init__(self, config_path):
- super(Darknet, self).__init__()
- self.module_defs = parse_model_config(config_path)
- self.hyperparams, self.module_list = create_modules(self.module_defs)
- self.yolo_layers = [layer[0]
- for layer in self.module_list if isinstance(layer[0], YOLOLayer)]
- self.seen = 0
- self.header_info = np.array([0, 0, 0, self.seen, 0], dtype=np.int32)
-
- def forward(self, x):
- img_size = x.size(2)
- layer_outputs, yolo_outputs = [], []
- for i, (module_def, module) in enumerate(zip(self.module_defs, self.module_list)):
- if module_def["type"] in ["convolutional", "upsample", "maxpool"]:
- x = module(x)
- elif module_def["type"] == "route":
- combined_outputs = torch.cat([layer_outputs[int(layer_i)] for layer_i in module_def["layers"].split(",")], 1)
- group_size = combined_outputs.shape[1] // int(module_def.get("groups", 1))
- group_id = int(module_def.get("group_id", 0))
- x = combined_outputs[:, group_size * group_id : group_size * (group_id + 1)] # Slice groupings used by yolo v4
- elif module_def["type"] == "shortcut":
- layer_i = int(module_def["from"])
- x = layer_outputs[-1] + layer_outputs[layer_i]
- elif module_def["type"] == "yolo":
- x = module[0](x, img_size)
- yolo_outputs.append(x)
- layer_outputs.append(x)
- return yolo_outputs if self.training else torch.cat(yolo_outputs, 1)
-
- def load_darknet_weights(self, weights_path):
- """Parses and loads the weights stored in 'weights_path'"""
-
- # Open the weights file
- with open(weights_path, "rb") as f:
- # First five are header values
- header = np.fromfile(f, dtype=np.int32, count=5)
- self.header_info = header # Needed to write header when saving weights
- self.seen = header[3] # number of images seen during training
- weights = np.fromfile(f, dtype=np.float32) # The rest are weights
-
- # Establish cutoff for loading backbone weights
- cutoff = None
- if "darknet53.conv.74" in weights_path:
- cutoff = 75
-
- ptr = 0
- for i, (module_def, module) in enumerate(zip(self.module_defs, self.module_list)):
- if i == cutoff:
- break
- if module_def["type"] == "convolutional":
- conv_layer = module[0]
- if module_def["batch_normalize"]:
- # Load BN bias, weights, running mean and running variance
- bn_layer = module[1]
- num_b = bn_layer.bias.numel() # Number of biases
- # Bias
- bn_b = torch.from_numpy(
- weights[ptr: ptr + num_b]).view_as(bn_layer.bias)
- bn_layer.bias.data.copy_(bn_b)
- ptr += num_b
- # Weight
- bn_w = torch.from_numpy(
- weights[ptr: ptr + num_b]).view_as(bn_layer.weight)
- bn_layer.weight.data.copy_(bn_w)
- ptr += num_b
- # Running Mean
- bn_rm = torch.from_numpy(
- weights[ptr: ptr + num_b]).view_as(bn_layer.running_mean)
- bn_layer.running_mean.data.copy_(bn_rm)
- ptr += num_b
- # Running Var
- bn_rv = torch.from_numpy(
- weights[ptr: ptr + num_b]).view_as(bn_layer.running_var)
- bn_layer.running_var.data.copy_(bn_rv)
- ptr += num_b
- else:
- # Load conv. bias
- num_b = conv_layer.bias.numel()
- conv_b = torch.from_numpy(
- weights[ptr: ptr + num_b]).view_as(conv_layer.bias)
- conv_layer.bias.data.copy_(conv_b)
- ptr += num_b
- # Load conv. weights
- num_w = conv_layer.weight.numel()
- conv_w = torch.from_numpy(
- weights[ptr: ptr + num_w]).view_as(conv_layer.weight)
- conv_layer.weight.data.copy_(conv_w)
- ptr += num_w
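-
-    # Reference note (added for clarity): a darknet .weights file is five
-    # int32 header values followed by raw float32 parameters. Per convolutional
-    # block the order is [bn_bias, bn_weight, bn_running_mean, bn_running_var]
-    # when batch norm is present (otherwise conv_bias), then conv_weight --
-    # exactly the order consumed by the reader above and emitted by the
-    # writer below.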
-
- def save_darknet_weights(self, path, cutoff=-1):
-        """Saves the model weights in darknet format.
-
-        :param path: Path of the new weights file
-        :param cutoff: Save layers between 0 and cutoff (cutoff = -1 -> all are saved)
-        """
- fp = open(path, "wb")
- self.header_info[3] = self.seen
- self.header_info.tofile(fp)
-
- # Iterate through layers
- for i, (module_def, module) in enumerate(zip(self.module_defs[:cutoff], self.module_list[:cutoff])):
- if module_def["type"] == "convolutional":
- conv_layer = module[0]
-                # If batch norm, write the BN parameters first
- if module_def["batch_normalize"]:
- bn_layer = module[1]
- bn_layer.bias.data.cpu().numpy().tofile(fp)
- bn_layer.weight.data.cpu().numpy().tofile(fp)
- bn_layer.running_mean.data.cpu().numpy().tofile(fp)
- bn_layer.running_var.data.cpu().numpy().tofile(fp)
-                # Otherwise, write the conv bias
- else:
- conv_layer.bias.data.cpu().numpy().tofile(fp)
-                # Write the conv weights
- conv_layer.weight.data.cpu().numpy().tofile(fp)
-
- fp.close()
-
-
-def load_model(model_path, weights_path=None):
- """Loads the yolo model from file.
-
- :param model_path: Path to model definition file (.cfg)
- :type model_path: str
- :param weights_path: Path to weights or checkpoint file (.weights or .pth)
- :type weights_path: str
- :return: Returns model
- :rtype: Darknet
- """
- device = torch.device("cuda" if torch.cuda.is_available()
- else "cpu") # Select device for inference
- model = Darknet(model_path).to(device)
-
- model.apply(weights_init_normal)
-
- # If pretrained weights are specified, start from checkpoint or weight file
- if weights_path:
- if weights_path.endswith(".pth"):
- # Load checkpoint weights
- model.load_state_dict(torch.load(weights_path, map_location=device))
- else:
- # Load darknet weights
- model.load_darknet_weights(weights_path)
- return model
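-
-
-# Usage sketch (illustrative; the paths are assumptions, not shipped defaults):
-#
-#   model = load_model("config/yolov3.cfg", "weights/yolov3.weights")
-#   model.eval()
-#   with torch.no_grad():
-#       detections = model(imgs)  # imgs: (bs, 3, H, W) with H == W, a multiple of 32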
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/test.py b/cv/detection/yolov3/pytorch/pytorchyolo/test.py
deleted file mode 100644
index f281d3279413efb86876c0b43953af1921e5fbc6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/test.py
+++ /dev/null
@@ -1,297 +0,0 @@
-#! /usr/bin/env python3
-
-from __future__ import division
-
-import argparse
-import tqdm
-import numpy as np
-
-from terminaltables import AsciiTable
-
-import torch
-
-try:
- from torch.utils.tensorboard import SummaryWriter
-except ImportError:
- class SummaryWriter(object):
- def __init__(self, log_dir=None, comment='', purge_step=None, max_queue=10,
- flush_secs=120, filename_suffix=''):
- if not log_dir:
-                import os
-                import socket
- from datetime import datetime
- current_time = datetime.now().strftime('%b%d_%H-%M-%S')
- log_dir = os.path.join(
- 'runs', current_time + '_' + socket.gethostname() + comment)
- self.log_dir = log_dir
- self.purge_step = purge_step
- self.max_queue = max_queue
- self.flush_secs = flush_secs
- self.filename_suffix = filename_suffix
-
- # Initialize the file writers, but they can be cleared out on close
- # and recreated later as needed.
- self.file_writer = self.all_writers = None
- self._get_file_writer()
-
- # Create default bins for histograms, see generate_testdata.py in tensorflow/tensorboard
- v = 1E-12
- buckets = []
- neg_buckets = []
- while v < 1E20:
- buckets.append(v)
- neg_buckets.append(-v)
- v *= 1.1
- self.default_bins = neg_buckets[::-1] + [0] + buckets
-
- def _check_caffe2_blob(self, item): pass
-
- def _get_file_writer(self): pass
-
- def get_logdir(self):
- """Returns the directory where event files will be written."""
- return self.log_dir
-
- def add_hparams(self, hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None): pass
-
- def add_scalar(self, tag, scalar_value, global_step=None, walltime=None, new_style=False): pass
-
- def add_scalars(self, main_tag, tag_scalar_dict, global_step=None, walltime=None): pass
-
- def add_histogram(self, tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None): pass
-
- def add_histogram_raw(self, tag, min, max, num, sum, sum_squares, bucket_limits, bucket_counts, global_step=None, walltime=None): pass
-
- def add_image(self, tag, img_tensor, global_step=None, walltime=None, dataformats='CHW'): pass
-
- def add_images(self, tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW'): pass
-
- def add_image_with_boxes(self, tag, img_tensor, box_tensor, global_step=None, walltime=None, rescale=1, dataformats='CHW', labels=None): pass
-
- def add_figure(self, tag, figure, global_step=None, close=True, walltime=None): pass
-
- def add_video(self, tag, vid_tensor, global_step=None, fps=4, walltime=None): pass
-
- def add_audio(self, tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None): pass
-
- def add_text(self, tag, text_string, global_step=None, walltime=None): pass
-
- def add_onnx_graph(self, prototxt): pass
-
- def add_graph(self, model, input_to_model=None, verbose=False): pass
-
- @staticmethod
- def _encode(rawstr): pass
-
- def add_embedding(self, mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None): pass
-
- def add_pr_curve(self, tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_pr_curve_raw(self, tag, true_positive_counts, false_positive_counts, true_negative_counts, false_negative_counts, precision, recall, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_custom_scalars_multilinechart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars_marginchart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars(self, layout): pass
-
- def add_mesh(self, tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None): pass
-
- def flush(self): pass
-
- def close(self): pass
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.close()
-
-
-from torch.utils.data import DataLoader
-from torch.autograd import Variable
-
-from pytorchyolo.models import load_model
-from pytorchyolo.utils.utils import load_classes, ap_per_class, get_batch_statistics, non_max_suppression, to_cpu, xywh2xyxy, print_environment_info
-from pytorchyolo.utils.datasets import ListDataset
-from pytorchyolo.utils.transforms import DEFAULT_TRANSFORMS
-from pytorchyolo.utils.parse_config import parse_data_config
-
-
-def evaluate_model_file(model_path, weights_path, img_path, class_names, batch_size=8, img_size=416,
- n_cpu=8, iou_thres=0.5, conf_thres=0.5, nms_thres=0.5, verbose=True):
- """Evaluate model on validation dataset.
-
- :param model_path: Path to model definition file (.cfg)
- :type model_path: str
- :param weights_path: Path to weights or checkpoint file (.weights or .pth)
- :type weights_path: str
- :param img_path: Path to file containing all paths to validation images.
- :type img_path: str
- :param class_names: List of class names
- :type class_names: [str]
- :param batch_size: Size of each image batch, defaults to 8
- :type batch_size: int, optional
- :param img_size: Size of each image dimension for yolo, defaults to 416
- :type img_size: int, optional
- :param n_cpu: Number of cpu threads to use during batch generation, defaults to 8
- :type n_cpu: int, optional
- :param iou_thres: IOU threshold required to qualify as detected, defaults to 0.5
- :type iou_thres: float, optional
- :param conf_thres: Object confidence threshold, defaults to 0.5
- :type conf_thres: float, optional
- :param nms_thres: IOU threshold for non-maximum suppression, defaults to 0.5
- :type nms_thres: float, optional
- :param verbose: If True, prints stats of model, defaults to True
- :type verbose: bool, optional
- :return: Returns precision, recall, AP, f1, ap_class
- """
- dataloader = _create_validation_data_loader(
- img_path, batch_size, img_size, n_cpu)
- model = load_model(model_path, weights_path)
- metrics_output = _evaluate(
- model,
- dataloader,
- class_names,
- img_size,
- iou_thres,
- conf_thres,
- nms_thres,
- verbose)
- return metrics_output
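-
-# Usage sketch (illustrative; the file paths are assumptions):
-#
-#   class_names = load_classes("data/coco.names")
-#   precision, recall, AP, f1, ap_class = evaluate_model_file(
-#       "config/yolov3.cfg", "weights/yolov3.weights",
-#       "data/coco/5k.txt", class_names, conf_thres=0.01, nms_thres=0.4)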
-
-
-def print_eval_stats(metrics_output, class_names, verbose):
- if metrics_output is not None:
- precision, recall, AP, f1, ap_class = metrics_output
- if verbose:
- # Prints class AP and mean AP
- ap_table = [["Index", "Class", "AP"]]
- for i, c in enumerate(ap_class):
- ap_table += [[c, class_names[c], "%.5f" % AP[i]]]
- print(AsciiTable(ap_table).table)
- print(f"---- mAP {AP.mean():.5f} ----")
- else:
- print("---- mAP not measured (no detections found by model) ----")
-
-
-def _evaluate(model, dataloader, class_names, img_size, iou_thres, conf_thres, nms_thres, verbose):
- """Evaluate model on validation dataset.
-
- :param model: Model to evaluate
- :type model: models.Darknet
- :param dataloader: Dataloader provides the batches of images with targets
- :type dataloader: DataLoader
- :param class_names: List of class names
- :type class_names: [str]
- :param img_size: Size of each image dimension for yolo
- :type img_size: int
- :param iou_thres: IOU threshold required to qualify as detected
- :type iou_thres: float
- :param conf_thres: Object confidence threshold
- :type conf_thres: float
- :param nms_thres: IOU threshold for non-maximum suppression
- :type nms_thres: float
- :param verbose: If True, prints stats of model
- :type verbose: bool
- :return: Returns precision, recall, AP, f1, ap_class
- """
- model.eval() # Set model to evaluation mode
-
- Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
-
- labels = []
- sample_metrics = [] # List of tuples (TP, confs, pred)
- for _, imgs, targets in tqdm.tqdm(dataloader, desc="Validating"):
- # Extract labels
- labels += targets[:, 1].tolist()
- # Rescale target
- targets[:, 2:] = xywh2xyxy(targets[:, 2:])
- targets[:, 2:] *= img_size
-
- imgs = Variable(imgs.type(Tensor), requires_grad=False)
-
- with torch.no_grad():
- outputs = model(imgs)
- outputs = non_max_suppression(outputs, conf_thres=conf_thres, iou_thres=nms_thres)
-
- sample_metrics += get_batch_statistics(outputs, targets, iou_threshold=iou_thres)
-
- if len(sample_metrics) == 0: # No detections over whole validation set.
- print("---- No detections over whole validation set ----")
- return None
-
- # Concatenate sample statistics
- true_positives, pred_scores, pred_labels = [
- np.concatenate(x, 0) for x in list(zip(*sample_metrics))]
- metrics_output = ap_per_class(
- true_positives, pred_scores, pred_labels, labels)
-
- print_eval_stats(metrics_output, class_names, verbose)
-
- return metrics_output
-
-
-def _create_validation_data_loader(img_path, batch_size, img_size, n_cpu):
- """
- Creates a DataLoader for validation.
-
- :param img_path: Path to file containing all paths to validation images.
- :type img_path: str
- :param batch_size: Size of each image batch
- :type batch_size: int
- :param img_size: Size of each image dimension for yolo
- :type img_size: int
- :param n_cpu: Number of cpu threads to use during batch generation
- :type n_cpu: int
- :return: Returns DataLoader
- :rtype: DataLoader
- """
- dataset = ListDataset(img_path, img_size=img_size, multiscale=False, transform=DEFAULT_TRANSFORMS)
- dataloader = DataLoader(
- dataset,
- batch_size=batch_size,
- shuffle=False,
- num_workers=n_cpu,
- pin_memory=True,
- collate_fn=dataset.collate_fn)
- return dataloader
-
-
-def run():
- print_environment_info()
- parser = argparse.ArgumentParser(description="Evaluate validation data.")
- parser.add_argument("-m", "--model", type=str, default="config/yolov3.cfg", help="Path to model definition file (.cfg)")
- parser.add_argument("-w", "--weights", type=str, default="weights/yolov3.weights", help="Path to weights or checkpoint file (.weights or .pth)")
- parser.add_argument("-d", "--data", type=str, default="config/coco.data", help="Path to data config file (.data)")
- parser.add_argument("-b", "--batch_size", type=int, default=8, help="Size of each image batch")
- parser.add_argument("-v", "--verbose", action='store_true', help="Makes the validation more verbose")
- parser.add_argument("--img_size", type=int, default=416, help="Size of each image dimension for yolo")
- parser.add_argument("--n_cpu", type=int, default=8, help="Number of cpu threads to use during batch generation")
- parser.add_argument("--iou_thres", type=float, default=0.5, help="IOU threshold required to qualify as detected")
- parser.add_argument("--conf_thres", type=float, default=0.01, help="Object confidence threshold")
- parser.add_argument("--nms_thres", type=float, default=0.4, help="IOU threshold for non-maximum suppression")
- args = parser.parse_args()
- print(f"Command line arguments: {args}")
-
- # Load configuration from data file
- data_config = parse_data_config(args.data)
- # Path to file containing all images for validation
- valid_path = data_config["valid"]
- class_names = load_classes(data_config["names"]) # List of class names
-
- precision, recall, AP, f1, ap_class = evaluate_model_file(
- args.model,
- args.weights,
- valid_path,
- class_names,
- batch_size=args.batch_size,
- img_size=args.img_size,
- n_cpu=args.n_cpu,
- iou_thres=args.iou_thres,
- conf_thres=args.conf_thres,
- nms_thres=args.nms_thres,
- verbose=True)
-
-
-if __name__ == "__main__":
- run()
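-
-# Command-line sketch (illustrative): evaluate a checkpoint on the validation
-# split configured in the .data file, e.g.
-#
-#   python3 -m pytorchyolo.test -w checkpoints/yolov3_ckpt_290.pth --conf_thres 0.01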
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/train.py b/cv/detection/yolov3/pytorch/pytorchyolo/train.py
deleted file mode 100644
index 831253cdb98ab071f3a47e11b4842cc1bab814e6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/train.py
+++ /dev/null
@@ -1,422 +0,0 @@
-#! /usr/bin/env python3
-
-from __future__ import division
-
-import os
-import argparse
-import tqdm
-import datetime
-import time
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), '../../../..'))
-
-import torch
-
-try:
- from torch.utils.tensorboard import SummaryWriter
-except ImportError:
- class SummaryWriter(object):
- def __init__(self, log_dir=None, comment='', purge_step=None, max_queue=10,
- flush_secs=120, filename_suffix=''):
- if not log_dir:
- import socket
- from datetime import datetime
- current_time = datetime.now().strftime('%b%d_%H-%M-%S')
- log_dir = os.path.join(
- 'runs', current_time + '_' + socket.gethostname() + comment)
- self.log_dir = log_dir
- self.purge_step = purge_step
- self.max_queue = max_queue
- self.flush_secs = flush_secs
- self.filename_suffix = filename_suffix
-
- # Initialize the file writers, but they can be cleared out on close
- # and recreated later as needed.
- self.file_writer = self.all_writers = None
- self._get_file_writer()
-
- # Create default bins for histograms, see generate_testdata.py in tensorflow/tensorboard
- v = 1E-12
- buckets = []
- neg_buckets = []
- while v < 1E20:
- buckets.append(v)
- neg_buckets.append(-v)
- v *= 1.1
- self.default_bins = neg_buckets[::-1] + [0] + buckets
-
- def _check_caffe2_blob(self, item): pass
-
- def _get_file_writer(self): pass
-
- def get_logdir(self):
- """Returns the directory where event files will be written."""
- return self.log_dir
-
- def add_hparams(self, hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None): pass
-
- def add_scalar(self, tag, scalar_value, global_step=None, walltime=None, new_style=False): pass
-
- def add_scalars(self, main_tag, tag_scalar_dict, global_step=None, walltime=None): pass
-
- def add_histogram(self, tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None): pass
-
- def add_histogram_raw(self, tag, min, max, num, sum, sum_squares, bucket_limits, bucket_counts, global_step=None, walltime=None): pass
-
- def add_image(self, tag, img_tensor, global_step=None, walltime=None, dataformats='CHW'): pass
-
- def add_images(self, tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW'): pass
-
- def add_image_with_boxes(self, tag, img_tensor, box_tensor, global_step=None, walltime=None, rescale=1, dataformats='CHW', labels=None): pass
-
- def add_figure(self, tag, figure, global_step=None, close=True, walltime=None): pass
-
- def add_video(self, tag, vid_tensor, global_step=None, fps=4, walltime=None): pass
-
- def add_audio(self, tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None): pass
-
- def add_text(self, tag, text_string, global_step=None, walltime=None): pass
-
- def add_onnx_graph(self, prototxt): pass
-
- def add_graph(self, model, input_to_model=None, verbose=False): pass
-
- @staticmethod
- def _encode(rawstr): pass
-
- def add_embedding(self, mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None): pass
-
- def add_pr_curve(self, tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_pr_curve_raw(self, tag, true_positive_counts, false_positive_counts, true_negative_counts, false_negative_counts, precision, recall, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_custom_scalars_multilinechart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars_marginchart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars(self, layout): pass
-
- def add_mesh(self, tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None): pass
-
- def flush(self): pass
-
- def close(self): pass
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.close()
-
-from torch.utils.data import DataLoader, DistributedSampler
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-import torch.optim as optim
-
-from pytorchyolo.models import load_model
-from pytorchyolo.utils.logger import Logger
-from pytorchyolo.utils.utils import to_cpu, load_classes, print_environment_info, provide_determinism, worker_seed_set
-from pytorchyolo.utils.datasets import ListDataset
-from pytorchyolo.utils.augmentations import AUGMENTATION_TRANSFORMS
-# from pytorchyolo.utils.transforms import DEFAULT_TRANSFORMS
-from pytorchyolo.utils.parse_config import parse_data_config
-from pytorchyolo.utils.loss import compute_loss
-from pytorchyolo.test import _evaluate, _create_validation_data_loader
-
-from terminaltables import AsciiTable
-
-from torchsummary import summary
-
-
-def setup_for_distributed(is_master):
- """
- This function disables printing when not in master process
- """
- import builtins as __builtin__
- builtin_print = __builtin__.print
-
- def print(*args, **kwargs):
- force = kwargs.pop('force', False)
- if is_master or force:
- builtin_print(*args, **kwargs)
-
- __builtin__.print = print
-
-
-def _create_data_loader(img_path, batch_size, img_size, n_cpu, multiscale_training=False, distributed=False):
- """Creates a DataLoader for training.
-
- :param img_path: Path to file containing all paths to training images.
- :type img_path: str
- :param batch_size: Size of each image batch
- :type batch_size: int
- :param img_size: Size of each image dimension for yolo
- :type img_size: int
- :param n_cpu: Number of cpu threads to use during batch generation
- :type n_cpu: int
-    :param multiscale_training: Scale images to different sizes randomly
-    :type multiscale_training: bool
-    :param distributed: Wrap the dataset in a DistributedSampler for DDP training
-    :type distributed: bool
-    :return: Returns DataLoader
-    :rtype: DataLoader
- """
- dataset = ListDataset(
- img_path,
- img_size=img_size,
- multiscale=multiscale_training,
- transform=AUGMENTATION_TRANSFORMS)
- sampler = None
- if distributed:
- sampler = DistributedSampler(dataset, rank=dist.get_rank(), shuffle=True)
-    dataloader = DataLoader(
-        dataset,
-        batch_size=batch_size,
-        # shuffle and sampler are mutually exclusive in DataLoader, so only
-        # shuffle here when no DistributedSampler is supplied
-        shuffle=(sampler is None),
-        num_workers=n_cpu,
-        pin_memory=True,
-        collate_fn=dataset.collate_fn,
-        worker_init_fn=worker_seed_set,
-        sampler=sampler
-    )
- return dataloader
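-
-# Note (sketch, not part of the original loop below): when the
-# DistributedSampler created above is used, each epoch should reseed the
-# shuffling consistently across ranks, e.g.
-#
-#   if sampler is not None:
-#       sampler.set_epoch(epoch)
-#
-# before iterating the dataloader in every epoch.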
-
-
-def run():
- print_environment_info()
- start_time = time.time()
- parser = argparse.ArgumentParser(description="Trains the YOLO model.")
- parser.add_argument("-m", "--model", type=str, default="config/yolov3.cfg", help="Path to model definition file (.cfg)")
- parser.add_argument("-d", "--data", type=str, default="config/coco.data", help="Path to data config file (.data)")
- parser.add_argument("-e", "--epochs", type=int, default=300, help="Number of epochs")
- parser.add_argument("-v", "--verbose", action='store_true', help="Makes the training more verbose")
- parser.add_argument("--n_cpu", type=int, default=8, help="Number of cpu threads to use during batch generation")
- parser.add_argument("--pretrained_weights", type=str, help="Path to checkpoint file (.weights or .pth). Starts training from checkpoint model")
- parser.add_argument("--checkpoint_interval", type=int, default=1, help="Interval of epochs between saving model weights")
- parser.add_argument("--evaluation_interval", type=int, default=1, help="Interval of epochs between evaluations on validation set")
-    parser.add_argument("--multiscale_training", action="store_false", help="Disable multi-scale training (it is enabled by default)")
- parser.add_argument("--iou_thres", type=float, default=0.5, help="Evaluation: IOU threshold required to qualify as detected")
- parser.add_argument("--conf_thres", type=float, default=0.1, help="Evaluation: Object confidence threshold")
- parser.add_argument("--nms_thres", type=float, default=0.5, help="Evaluation: IOU threshold for non-maximum suppression")
- parser.add_argument("--logdir", type=str, default="logs", help="Directory for training log files (e.g. for TensorBoard)")
-    parser.add_argument("--seed", type=int, default=-1, help="Makes results reproducible. Set -1 to disable.")
-
- parser.add_argument("--local_rank", type=int, default=-1, help="Local rank.")
- parser.add_argument("--dist_backend", type=str, default="gloo", help="Distributed training backend.")
- args = parser.parse_args()
- rank = args.local_rank
- args.distributed = rank != -1
- args.rank = rank
- if args.distributed:
- dist_backend = args.dist_backend
- DIST_BACKEND_ENV = "PT_DIST_BACKEND"
- if DIST_BACKEND_ENV in os.environ:
-            print("WARN: Using the distributed backend specified by the environment.")
- dist_backend = os.environ[DIST_BACKEND_ENV]
- dist.init_process_group(backend=dist_backend, rank=args.rank)
- setup_for_distributed(args.rank == 0)
- torch.cuda.set_device(args.rank)
-
-    print('Visible CUDA device indices:', list(range(torch.cuda.device_count())))
- print(f"Command line arguments: {args}")
-
- if args.seed != -1:
- provide_determinism(args.seed)
-
- logger = Logger(args.logdir) # Tensorboard logger
-
- # Create output directories if missing
- os.makedirs("output", exist_ok=True)
- os.makedirs("checkpoints", exist_ok=True)
-
- # Get data configuration
- data_config = parse_data_config(args.data)
- train_path = data_config["train"]
- valid_path = data_config["valid"]
- class_names = load_classes(data_config["names"])
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
- # ############
- # Create model
- # ############
-
- model = load_model(args.model, args.pretrained_weights)
-
- # Print model
- if args.verbose:
- summary(model, input_size=(3, model.hyperparams['height'], model.hyperparams['height']))
-
- mini_batch_size = model.hyperparams['batch'] // model.hyperparams['subdivisions']
-
- # #################
- # Create Dataloader
- # #################
- # Load training dataloader
- dataloader = _create_data_loader(
- train_path,
- mini_batch_size,
- model.hyperparams['height'],
- args.n_cpu,
- args.multiscale_training,
- distributed=args.distributed
- )
-
- # Load validation dataloader
- validation_dataloader = _create_validation_data_loader(
- valid_path,
- mini_batch_size,
- model.hyperparams['height'],
- args.n_cpu
- )
-
- # ################
- # Create optimizer
- # ################
-
- params = [p for p in model.parameters() if p.requires_grad]
-
- if (model.hyperparams['optimizer'] in [None, "adam"]):
- optimizer = optim.Adam(
- params,
- lr=model.hyperparams['learning_rate'],
- weight_decay=model.hyperparams['decay'],
- )
- elif (model.hyperparams['optimizer'] == "sgd"):
- optimizer = optim.SGD(
- params,
- lr=model.hyperparams['learning_rate'],
- weight_decay=model.hyperparams['decay'],
- momentum=model.hyperparams['momentum'])
-    else:
-        raise ValueError("Unknown optimizer. Please choose between (adam, sgd).")
-
- model_module = model
- if args.distributed:
- model = DDP(model, device_ids=[args.rank])
- model_module = model.module
-
- for epoch in range(args.epochs):
-
- print("\n---- Training Model ----")
- epoch_start_time = time.time()
-
- model.train() # Set model to training mode
-
- for batch_i, (_, imgs, targets) in enumerate(tqdm.tqdm(dataloader, desc=f"Training Epoch {epoch}")):
- batches_done = len(dataloader) * epoch + batch_i
-
- imgs = imgs.to(device, non_blocking=True)
- targets = targets.to(device)
-
- outputs = model(imgs)
-
- loss, loss_components = compute_loss(outputs, targets, model_module)
-
- loss.backward()
-
- ###############
- # Run optimizer
- ###############
-
- if batches_done % model_module.hyperparams['subdivisions'] == 0:
- # Adapt learning rate
- # Get learning rate defined in cfg
- lr = model_module.hyperparams['learning_rate']
- if batches_done < model_module.hyperparams['burn_in']:
- # Burn in
- lr *= (batches_done / model_module.hyperparams['burn_in'])
- else:
- # Set and parse the learning rate to the steps defined in the cfg
- for threshold, value in model_module.hyperparams['lr_steps']:
- if batches_done > threshold:
- lr *= value
- # Log the learning rate
- if rank in [-1, 0]:
- logger.scalar_summary("train/learning_rate", lr, batches_done)
- # Set learning rate
- for g in optimizer.param_groups:
- g['lr'] = lr
-
- # Run optimizer
- optimizer.step()
- # Reset gradients
- optimizer.zero_grad()
-
- # ############
- # Log progress
- # ############
- if args.verbose and rank in [-1, 0]:
- print(AsciiTable(
- [
- ["Type", "Value"],
- ["IoU loss", float(loss_components[0])],
- ["Object loss", float(loss_components[1])],
- ["Class loss", float(loss_components[2])],
- ["Loss", float(loss_components[3])],
- ["Batch loss", to_cpu(loss).item()],
- ]).table)
-
- # Tensorboard logging
- if rank in [-1, 0]:
- tensorboard_log = [
- ("train/iou_loss", float(loss_components[0])),
- ("train/obj_loss", float(loss_components[1])),
- ("train/class_loss", float(loss_components[2])),
- ("train/loss", to_cpu(loss).item())]
- logger.list_of_scalars_summary(tensorboard_log, batches_done)
-
- model_module.seen += imgs.size(0)
-
- # #############
- # Save progress
- # #############
-
- # Save model to checkpoint file
- if epoch % args.checkpoint_interval == 0 and rank in [-1, 0]:
- checkpoint_path = f"checkpoints/yolov3_ckpt_{epoch}.pth"
- print(f"---- Saving checkpoint to: '{checkpoint_path}' ----")
-            torch.save(model_module.state_dict(), checkpoint_path)  # save the unwrapped module so the checkpoint loads without the DDP prefix
-
- # ########
- # Evaluate
- # ########
-
- if epoch % args.evaluation_interval == 0:
- print("\n---- Evaluating Model ----")
- # Evaluate the model on the validation set
- metrics_output = _evaluate(
- model_module,
- validation_dataloader,
- class_names,
- img_size=model_module.hyperparams['height'],
- iou_thres=args.iou_thres,
- conf_thres=args.conf_thres,
- nms_thres=args.nms_thres,
- verbose=args.verbose
- )
-
- if metrics_output is not None and rank in [-1, 0]:
- precision, recall, AP, f1, ap_class = metrics_output
- evaluation_metrics = [
- ("validation/precision", precision.mean()),
- ("validation/recall", recall.mean()),
- ("validation/mAP", AP.mean()),
- ("validation/f1", f1.mean())]
- logger.list_of_scalars_summary(evaluation_metrics, epoch)
-
- epoch_total_time = time.time() - epoch_start_time
- epoch_total_time_str = str(datetime.timedelta(seconds=int(epoch_total_time)))
-
- fps = len(dataloader) * mini_batch_size / epoch_total_time
- if dist.is_initialized():
- fps = fps * dist.get_world_size()
-
- print('epoch time {}, Total FPS: {}'.format(epoch_total_time_str, fps))
-
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
-if __name__ == "__main__":
- run()
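-
-# Launch sketch (illustrative): single-process training
-#
-#   python3 -m pytorchyolo.train -m config/yolov3.cfg -d config/coco.data
-#
-# and, assuming a launcher that sets --local_rank and the rendezvous
-# environment variables (e.g. torch.distributed.launch):
-#
-#   python3 -m torch.distributed.launch --nproc_per_node=8 pytorchyolo/train.py --dist_backend nccl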
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/__init__.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/augmentations.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/augmentations.py
deleted file mode 100644
index 5c1eb58a12fb4808ac052d1613d03ccbb8d3e24d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/utils/augmentations.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import imgaug.augmenters as iaa
-from torchvision import transforms
-from pytorchyolo.utils.transforms import ToTensor, PadSquare, RelativeLabels, AbsoluteLabels, ImgAug
-
-
-class DefaultAug(ImgAug):
- def __init__(self, ):
- self.augmentations = iaa.Sequential([
- iaa.Sharpen((0.0, 0.1)),
-            iaa.Affine(rotate=(0, 0), translate_percent=(-0.1, 0.1), scale=(0.8, 1.5)),
- iaa.AddToBrightness((-60, 40)),
- iaa.AddToHue((-10, 10)),
- iaa.Fliplr(0.5),
- ])
-
-
-class StrongAug(ImgAug):
- def __init__(self, ):
- self.augmentations = iaa.Sequential([
- iaa.Dropout([0.0, 0.01]),
- iaa.Sharpen((0.0, 0.1)),
- iaa.Affine(rotate=(-10, 10), translate_percent=(-0.1, 0.1), scale=(0.8, 1.5)),
- iaa.AddToBrightness((-60, 40)),
- iaa.AddToHue((-20, 20)),
- iaa.Fliplr(0.5),
- ])
-
-
-AUGMENTATION_TRANSFORMS = transforms.Compose([
- AbsoluteLabels(),
- DefaultAug(),
- PadSquare(),
- RelativeLabels(),
- ToTensor(),
-])
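-
-# Usage sketch (illustrative): the pipeline maps a (numpy image, boxes) pair,
-# boxes being rows of [class, x_c, y_c, w, h] normalized to [0, 1], to an
-# image tensor plus an (N, 6) target tensor whose first column is a
-# placeholder for the sample index (filled in later by the collate_fn):
-#
-#   img_tensor, bb_targets = AUGMENTATION_TRANSFORMS((np_image, np_boxes))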
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/datasets.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/datasets.py
deleted file mode 100644
index 206c9bfb905c1e684ef80ca8b14dd495989deffc..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/utils/datasets.py
+++ /dev/null
@@ -1,142 +0,0 @@
-from torch.utils.data import Dataset
-import torch.nn.functional as F
-import torch
-import glob
-import random
-import os
-import warnings
-import numpy as np
-from PIL import Image
-from PIL import ImageFile
-
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-
-
-def pad_to_square(img, pad_value):
- c, h, w = img.shape
- dim_diff = np.abs(h - w)
- # (upper / left) padding and (lower / right) padding
- pad1, pad2 = dim_diff // 2, dim_diff - dim_diff // 2
- # Determine padding
- pad = (0, 0, pad1, pad2) if h <= w else (pad1, pad2, 0, 0)
- # Add padding
- img = F.pad(img, pad, "constant", value=pad_value)
-
- return img, pad
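-
-# Worked example (illustrative): a 3x100x160 image gives dim_diff = 60, so
-# pad1, pad2 = 30, 30 and pad = (0, 0, 30, 30); F.pad interprets this as
-# (left, right, top, bottom), yielding a square 3x160x160 tensor.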
-
-
-def resize(image, size):
- image = F.interpolate(image.unsqueeze(0), size=size, mode="nearest").squeeze(0)
- return image
-
-
-class ImageFolder(Dataset):
- def __init__(self, folder_path, transform=None):
- self.files = sorted(glob.glob("%s/*.*" % folder_path))
- self.transform = transform
-
- def __getitem__(self, index):
-
- img_path = self.files[index % len(self.files)]
- img = np.array(
- Image.open(img_path).convert('RGB'),
- dtype=np.uint8)
-
- # Label Placeholder
- boxes = np.zeros((1, 5))
-
- # Apply transforms
- if self.transform:
- img, _ = self.transform((img, boxes))
-
- return img_path, img
-
- def __len__(self):
- return len(self.files)
-
-
-class ListDataset(Dataset):
- def __init__(self, list_path, img_size=416, multiscale=True, transform=None):
- with open(list_path, "r") as file:
- self.img_files = file.readlines()
- self.label_files = []
- for path in self.img_files:
- image_dir = os.path.dirname(path)
- label_dir = "labels".join(image_dir.rsplit("images", 1))
- assert label_dir != image_dir, \
- f"Image path must contain a folder named 'images'! \n'{image_dir}'"
- label_file = os.path.join(label_dir, os.path.basename(path))
- label_file = os.path.splitext(label_file)[0] + '.txt'
- self.label_files.append(label_file)
-
- self.img_size = img_size
- self.max_objects = 100
- self.multiscale = multiscale
- self.min_size = self.img_size - 3 * 32
- self.max_size = self.img_size + 3 * 32
- self.batch_count = 0
- self.transform = transform
-
- def __getitem__(self, index):
-
- # ---------
- # Image
- # ---------
-        img_path = self.img_files[index % len(self.img_files)].rstrip()
-        try:
-            img = np.array(Image.open(img_path).convert('RGB'), dtype=np.uint8)
-        except Exception:
-            print(f"Could not read image '{img_path}'.")
-            return
-
- # ---------
- # Label
- # ---------
- try:
- label_path = self.label_files[index % len(self.img_files)].rstrip()
-
- # Ignore warning if file is empty
- with warnings.catch_warnings():
- warnings.simplefilter("ignore")
- boxes = np.loadtxt(label_path).reshape(-1, 5)
-        except Exception:
-            # Silently skip samples whose label file is missing or unreadable
-            return
-
- # -----------
- # Transform
- # -----------
- if self.transform:
- try:
- img, bb_targets = self.transform((img, boxes))
- except Exception:
- print("Could not apply transform.")
- return
-
- return img_path, img, bb_targets
-
- def collate_fn(self, batch):
- self.batch_count += 1
- # Drop invalid images
- batch = [data for data in batch if data is not None]
- paths, imgs, bb_targets = list(zip(*batch))
-
- # Selects new image size every tenth batch
- if self.multiscale and self.batch_count % 10 == 0:
- self.img_size = random.choice(
- range(self.min_size, self.max_size + 1, 32))
-
- # Resize images to input shape
- imgs = torch.stack([resize(img, self.img_size) for img in imgs])
-
- # Add sample index to targets
- for i, boxes in enumerate(bb_targets):
- boxes[:, 0] = i
- bb_targets = torch.cat(bb_targets, 0)
-
- return paths, imgs, bb_targets
-
- def __len__(self):
- return len(self.img_files)
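-
-
-# Usage sketch (illustrative; 'train.txt' is an assumed list of image paths):
-#
-#   dataset = ListDataset("data/coco/train.txt", img_size=416,
-#                         multiscale=True, transform=AUGMENTATION_TRANSFORMS)
-#   loader = DataLoader(dataset, batch_size=8, collate_fn=dataset.collate_fn)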
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/logger.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/logger.py
deleted file mode 100644
index 26b606799ed0a254713ae0853c86f8af850cbd68..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/utils/logger.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import os
-import datetime
-import torch
-
-try:
- from torch.utils.tensorboard import SummaryWriter
-except ImportError:
- class SummaryWriter(object):
- def __init__(self, log_dir=None, comment='', purge_step=None, max_queue=10,
- flush_secs=120, filename_suffix=''):
- if not log_dir:
- import socket
- from datetime import datetime
- current_time = datetime.now().strftime('%b%d_%H-%M-%S')
- log_dir = os.path.join(
- 'runs', current_time + '_' + socket.gethostname() + comment)
- self.log_dir = log_dir
- self.purge_step = purge_step
- self.max_queue = max_queue
- self.flush_secs = flush_secs
- self.filename_suffix = filename_suffix
-
- # Initialize the file writers, but they can be cleared out on close
- # and recreated later as needed.
- self.file_writer = self.all_writers = None
- self._get_file_writer()
-
- # Create default bins for histograms, see generate_testdata.py in tensorflow/tensorboard
- v = 1E-12
- buckets = []
- neg_buckets = []
- while v < 1E20:
- buckets.append(v)
- neg_buckets.append(-v)
- v *= 1.1
- self.default_bins = neg_buckets[::-1] + [0] + buckets
-
- def _check_caffe2_blob(self, item): pass
-
- def _get_file_writer(self): pass
-
- def get_logdir(self):
- """Returns the directory where event files will be written."""
- return self.log_dir
-
- def add_hparams(self, hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None): pass
-
- def add_scalar(self, tag, scalar_value, global_step=None, walltime=None, new_style=False): pass
-
- def add_scalars(self, main_tag, tag_scalar_dict, global_step=None, walltime=None): pass
-
- def add_histogram(self, tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None): pass
-
- def add_histogram_raw(self, tag, min, max, num, sum, sum_squares, bucket_limits, bucket_counts, global_step=None, walltime=None): pass
-
- def add_image(self, tag, img_tensor, global_step=None, walltime=None, dataformats='CHW'): pass
-
- def add_images(self, tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW'): pass
-
- def add_image_with_boxes(self, tag, img_tensor, box_tensor, global_step=None, walltime=None, rescale=1, dataformats='CHW', labels=None): pass
-
- def add_figure(self, tag, figure, global_step=None, close=True, walltime=None): pass
-
- def add_video(self, tag, vid_tensor, global_step=None, fps=4, walltime=None): pass
-
- def add_audio(self, tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None): pass
-
- def add_text(self, tag, text_string, global_step=None, walltime=None): pass
-
- def add_onnx_graph(self, prototxt): pass
-
- def add_graph(self, model, input_to_model=None, verbose=False): pass
-
- @staticmethod
- def _encode(rawstr): pass
-
- def add_embedding(self, mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None): pass
-
- def add_pr_curve(self, tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_pr_curve_raw(self, tag, true_positive_counts, false_positive_counts, true_negative_counts, false_negative_counts, precision, recall, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_custom_scalars_multilinechart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars_marginchart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars(self, layout): pass
-
- def add_mesh(self, tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None): pass
-
- def flush(self): pass
-
- def close(self): pass
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.close()
-
-
-
-class Logger(object):
- def __init__(self, log_dir, log_hist=True):
- """Create a summary writer logging to log_dir."""
-        if log_hist:  # Create a new sub-folder for each run
- log_dir = os.path.join(
- log_dir,
- datetime.datetime.now().strftime("%Y_%m_%d__%H_%M_%S"))
- self.writer = SummaryWriter(log_dir)
-
- def scalar_summary(self, tag, value, step):
- """Log a scalar variable."""
- self.writer.add_scalar(tag, value, step)
-
- def list_of_scalars_summary(self, tag_value_pairs, step):
- """Log scalar variables."""
- for tag, value in tag_value_pairs:
- self.writer.add_scalar(tag, value, step)
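-
-
-# Usage sketch (illustrative):
-#
-#   logger = Logger("logs")
-#   logger.scalar_summary("train/loss", 0.42, 100)
-#   logger.list_of_scalars_summary([("validation/mAP", 0.31)], 1)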
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/loss.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/loss.py
deleted file mode 100644
index 1a3f091da65d0d2e7913a72d49db0bfeb1b271b1..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/utils/loss.py
+++ /dev/null
@@ -1,251 +0,0 @@
-import math
-
-import torch
-import torch.nn as nn
-
-from .utils import to_cpu
-
-# This new loss function is based on https://github.com/ultralytics/yolov3/blob/master/utils/loss.py
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-9):
-    # Returns the IoU of box1 to box2; box1 is a 4-vector, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
- if GIoU or DIoU or CIoU:
- # convex (smallest enclosing box) width
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1)
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * \
- torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha = v / ((1 + eps) - iou + v)
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
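-
-# Quick numeric check (illustrative): identical boxes give IoU 1.0, while the
-# xyxy boxes (0, 0, 2, 2) and (1, 1, 3, 3) intersect in a 1x1 square out of a
-# union of 7, so the plain IoU is 1/7 (ignoring the small eps terms).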
-
-
-def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441
- # return positive, negative label smoothing BCE targets
- return 1.0 - 0.5 * eps, 0.5 * eps
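-
-# Example (illustrative): smooth_BCE(eps=0.1) returns (0.95, 0.05), i.e.
-# positive targets become 0.95 and negative targets 0.05 instead of 1.0/0.0.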
-
-
-class BCEBlurWithLogitsLoss(nn.Module):
-    # BCEWithLogitsLoss() with reduced missing-label effects.
- def __init__(self, alpha=0.05):
- super(BCEBlurWithLogitsLoss, self).__init__()
- self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss()
- self.alpha = alpha
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- pred = torch.sigmoid(pred) # prob from logits
- dx = pred - true # reduce only missing label effects
- # dx = (pred - true).abs() # reduce missing label and false label effects
- alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
- loss *= alpha_factor
- return loss.mean()
-
-
-class FocalLoss(nn.Module):
- # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(FocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- # p_t = torch.exp(-loss)
- # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
-
- # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
- pred_prob = torch.sigmoid(pred) # prob from logits
- p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = (1.0 - p_t) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
-
-
-class QFocalLoss(nn.Module):
- # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(QFocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
-
- pred_prob = torch.sigmoid(pred) # prob from logits
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = torch.abs(true - pred_prob) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
-
-
-def compute_loss(predictions, targets, model): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- tcls, tbox, indices, anchors = build_targets(predictions, targets, model) # targets
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(
- pos_weight=torch.tensor([1.0], device=device))
- BCEobj = nn.BCEWithLogitsLoss(
- pos_weight=torch.tensor([1.0], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- cp, cn = smooth_BCE(eps=0.0)
-
- # Focal loss
- gamma = 0 # focal loss gamma
- if gamma > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, gamma), FocalLoss(BCEobj, gamma)
-
- # Losses
- # layer index, layer predictions
- for layer_index, layer_predictions in enumerate(predictions):
- # image, anchor, gridy, gridx
- b, anchor, grid_j, grid_i = indices[layer_index]
- tobj = torch.zeros_like(layer_predictions[..., 0], device=device) # target obj
-
- num_targets = b.shape[0] # number of targets
- if num_targets:
- # prediction subset corresponding to targets
- ps = layer_predictions[b, anchor, grid_j, grid_i]
-
- # Regression
- pxy = ps[:, :2].sigmoid()
- pwh = torch.exp(ps[:, 2:4]) * anchors[layer_index]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- # iou(prediction, target)
- iou = bbox_iou(pbox.T, tbox[layer_index], x1y1x2y2=False, CIoU=True)
- lbox += (1.0 - iou).mean() # iou loss
-
- model.gr = 1
-
- # Objectness
- tobj[b, anchor, grid_j, grid_i] = \
- (1.0 - model.gr) + model.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- if ps.size(1) - 5 > 1:
- t = torch.full_like(ps[:, 5:], cn, device=device) # targets
- t[range(num_targets), tcls[layer_index]] = cp
- lcls += BCEcls(ps[:, 5:], t) # BCE
-
- lobj += BCEobj(layer_predictions[..., 4], tobj) # obj loss
-
- lbox *= 0.05 * (3. / 2)
- lobj *= (3. / 2)
- lcls *= 0.31
- batch_size = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
-
- return loss * batch_size, to_cpu(torch.cat((lbox, lobj, lcls, loss)))
-
-
-def build_targets(p, targets, model):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
-    na, nt = 3, targets.shape[0]  # number of anchors (hard-coded to 3 per layer), number of targets
- tcls, tbox, indices, anch = [], [], [], []
- gain = torch.ones(7, device=targets.device) # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- # append anchor indices
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2)
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0]], device=targets.device).float() * g # offsets
-
- for i, yolo_layer in enumerate(model.yolo_layers):
- anchors = yolo_layer.anchors / yolo_layer.stride
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
-            j = torch.max(r, 1. / r).max(2)[0] < 4  # keep targets whose wh ratio to the anchor is below 4
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
-            j = torch.stack((torch.ones_like(j),))  # keep every target; only the zero offset is used in this simplified variant
- t = t.repeat((off.shape[0], 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
-
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- # image, anchor, grid indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1)))
- tbox.append(torch.cat((gxy - gij, gwh), 1)) # box
- anch.append(anchors[a]) # anchors
- tcls.append(c) # class
-
- return tcls, tbox, indices, anch
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/parse_config.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/parse_config.py
deleted file mode 100644
index 4c9fa7f742072cee4f6276652151f2ceb1f7afba..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/utils/parse_config.py
+++ /dev/null
@@ -1,37 +0,0 @@
-def parse_model_config(path):
- """Parses the yolo-v3 layer configuration file and returns module definitions"""
-    with open(path, 'r') as file:
-        lines = file.read().split('\n')
-    lines = [x for x in lines if x and not x.startswith('#')]
-    lines = [x.strip() for x in lines]  # strip surrounding whitespace
- module_defs = []
- for line in lines:
- if line.startswith('['): # This marks the start of a new block
- module_defs.append({})
- module_defs[-1]['type'] = line[1:-1].rstrip()
- if module_defs[-1]['type'] == 'convolutional':
- module_defs[-1]['batch_normalize'] = 0
- else:
- key, value = line.split("=")
- value = value.strip()
- module_defs[-1][key.rstrip()] = value.strip()
-
- return module_defs
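-
-# Example (illustrative): a cfg fragment such as
-#
-#   [convolutional]
-#   batch_normalize=1
-#   filters=32
-#
-# parses to [{'type': 'convolutional', 'batch_normalize': '1', 'filters': '32'}];
-# note that values parsed from key=value lines are kept as strings at this stage.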
-
-
-def parse_data_config(path):
- """Parses the data configuration file"""
- options = dict()
- options['gpus'] = '0,1,2,3'
- options['num_workers'] = '10'
- with open(path, 'r') as fp:
- lines = fp.readlines()
- for line in lines:
- line = line.strip()
- if line == '' or line.startswith('#'):
- continue
- key, value = line.split('=')
- options[key.strip()] = value.strip()
- return options
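-
-
-# Example (illustrative): a .data file such as
-#
-#   classes=80
-#   train=data/coco/trainvalno5k.txt
-#   valid=data/coco/5k.txt
-#   names=data/coco.names
-#
-# yields options['train'], options['valid'] and options['names'] accordingly,
-# on top of the 'gpus' and 'num_workers' defaults set above.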
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/transforms.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/transforms.py
deleted file mode 100644
index 141ae9328b81c6200c956e783ca77e9ca6be324a..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/utils/transforms.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import torch
-import torch.nn.functional as F
-import numpy as np
-
-import imgaug.augmenters as iaa
-from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage
-
-from .utils import xywh2xyxy_np
-import torchvision.transforms as transforms
-
-
-class ImgAug(object):
-    def __init__(self, augmentations=None):
-        # Avoid a mutable default argument; subclasses define their own pipeline
-        self.augmentations = augmentations if augmentations is not None else []
-
- def __call__(self, data):
- # Unpack data
- img, boxes = data
-
- # Convert xywh to xyxy
- boxes = np.array(boxes)
- boxes[:, 1:] = xywh2xyxy_np(boxes[:, 1:])
-
- # Convert bounding boxes to imgaug
- bounding_boxes = BoundingBoxesOnImage(
- [BoundingBox(*box[1:], label=box[0]) for box in boxes],
- shape=img.shape)
-
- # Apply augmentations
- img, bounding_boxes = self.augmentations(
- image=img,
- bounding_boxes=bounding_boxes)
-
- # Clip out of image boxes
- bounding_boxes = bounding_boxes.clip_out_of_image()
-
- # Convert bounding boxes back to numpy
- boxes = np.zeros((len(bounding_boxes), 5))
- for box_idx, box in enumerate(bounding_boxes):
- # Extract coordinates for unpadded + unscaled image
- x1 = box.x1
- y1 = box.y1
- x2 = box.x2
- y2 = box.y2
-
- # Returns (x, y, w, h)
- boxes[box_idx, 0] = box.label
- boxes[box_idx, 1] = ((x1 + x2) / 2)
- boxes[box_idx, 2] = ((y1 + y2) / 2)
- boxes[box_idx, 3] = (x2 - x1)
- boxes[box_idx, 4] = (y2 - y1)
-
- return img, boxes
-
-
-class RelativeLabels(object):
- def __init__(self, ):
- pass
-
- def __call__(self, data):
- img, boxes = data
- h, w, _ = img.shape
- boxes[:, [1, 3]] /= w
- boxes[:, [2, 4]] /= h
- return img, boxes
-
-
-class AbsoluteLabels(object):
- def __init__(self, ):
- pass
-
- def __call__(self, data):
- img, boxes = data
- h, w, _ = img.shape
- boxes[:, [1, 3]] *= w
- boxes[:, [2, 4]] *= h
- return img, boxes
-
-
-class PadSquare(ImgAug):
- def __init__(self, ):
- self.augmentations = iaa.Sequential([
- iaa.PadToAspectRatio(
- 1.0,
- position="center-center").to_deterministic()
- ])
-
-
-class ToTensor(object):
- def __init__(self, ):
- pass
-
- def __call__(self, data):
- img, boxes = data
- # Extract image as PyTorch tensor
- img = transforms.ToTensor()(img)
-
- bb_targets = torch.zeros((len(boxes), 6))
- bb_targets[:, 1:] = transforms.ToTensor()(boxes)
-
- return img, bb_targets
-
-
-class Resize(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, data):
- img, boxes = data
- img = F.interpolate(img.unsqueeze(0), size=self.size, mode="nearest").squeeze(0)
- return img, boxes
-
-
-DEFAULT_TRANSFORMS = transforms.Compose([
- AbsoluteLabels(),
- PadSquare(),
- RelativeLabels(),
- ToTensor(),
-])
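-
-# Note (added for clarity): this pipeline mirrors AUGMENTATION_TRANSFORMS in
-# utils/augmentations.py minus the random augmentation step, making it the
-# deterministic counterpart used for validation and inference.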
diff --git a/cv/detection/yolov3/pytorch/pytorchyolo/utils/utils.py b/cv/detection/yolov3/pytorch/pytorchyolo/utils/utils.py
deleted file mode 100644
index 316ad5ce8e371f95ca9232d1161ad6e186264730..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/pytorchyolo/utils/utils.py
+++ /dev/null
@@ -1,387 +0,0 @@
-from __future__ import division
-
-import os
-import time
-import platform
-import tqdm
-import torch
-import torch.nn as nn
-import torchvision
-import numpy as np
-import subprocess
-import random
-
-
-def provide_determinism(seed=42):
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
-
- torch.backends.cudnn.benchmark = False
- torch.backends.cudnn.deterministic = True
-
-
-def worker_seed_set(worker_id):
- # See for details of numpy:
- # https://github.com/pytorch/pytorch/issues/5059#issuecomment-817392562
- # See for details of random:
- # https://pytorch.org/docs/stable/notes/randomness.html#dataloader
-
- # NumPy
- uint64_seed = torch.initial_seed()
- ss = np.random.SeedSequence([uint64_seed])
- np.random.seed(ss.generate_state(4))
-
- # random
- worker_seed = torch.initial_seed() % 2**32
- random.seed(worker_seed)
-
-
-def to_cpu(tensor):
- return tensor.detach().cpu()
-
-
-def load_classes(path):
- """
- Loads class labels at 'path'
- """
- with open(path, "r") as fp:
- names = fp.read().splitlines()
- return names
-
-
-def weights_init_normal(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- nn.init.normal_(m.weight.data, 0.0, 0.02)
- elif classname.find("BatchNorm2d") != -1:
- nn.init.normal_(m.weight.data, 1.0, 0.02)
- nn.init.constant_(m.bias.data, 0.0)
-
-
-def rescale_boxes(boxes, current_dim, original_shape):
- """
- Rescales bounding boxes to the original shape
- """
- orig_h, orig_w = original_shape
-
- # The amount of padding that was added
- pad_x = max(orig_h - orig_w, 0) * (current_dim / max(original_shape))
- pad_y = max(orig_w - orig_h, 0) * (current_dim / max(original_shape))
-
- # Image height and width after padding is removed
- unpad_h = current_dim - pad_y
- unpad_w = current_dim - pad_x
-
- # Rescale bounding boxes to dimension of original image
- boxes[:, 0] = ((boxes[:, 0] - pad_x // 2) / unpad_w) * orig_w
- boxes[:, 1] = ((boxes[:, 1] - pad_y // 2) / unpad_h) * orig_h
- boxes[:, 2] = ((boxes[:, 2] - pad_x // 2) / unpad_w) * orig_w
- boxes[:, 3] = ((boxes[:, 3] - pad_y // 2) / unpad_h) * orig_h
- return boxes
-
-
-def xywh2xyxy(x):
- y = x.new(x.shape)
- y[..., 0] = x[..., 0] - x[..., 2] / 2
- y[..., 1] = x[..., 1] - x[..., 3] / 2
- y[..., 2] = x[..., 0] + x[..., 2] / 2
- y[..., 3] = x[..., 1] + x[..., 3] / 2
- return y
-
-
-def xywh2xyxy_np(x):
- y = np.zeros_like(x)
- y[..., 0] = x[..., 0] - x[..., 2] / 2
- y[..., 1] = x[..., 1] - x[..., 3] / 2
- y[..., 2] = x[..., 0] + x[..., 2] / 2
- y[..., 3] = x[..., 1] + x[..., 3] / 2
- return y
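-
-# Worked example (illustrative): the center-format box (x_c=2, y_c=2, w=2, h=4)
-# maps to the corner-format box (x1=1, y1=0, x2=3, y2=4) under both variants above.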
-
-
-def ap_per_class(tp, conf, pred_cls, target_cls):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (list).
- conf: Objectness value from 0-1 (list).
- pred_cls: Predicted object classes (list).
- target_cls: True object classes (list).
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes = np.unique(target_cls)
-
- # Create Precision-Recall curve and compute AP for each class
- ap, p, r = [], [], []
- for c in tqdm.tqdm(unique_classes, desc="Computing AP"):
- i = pred_cls == c
- n_gt = (target_cls == c).sum() # Number of ground truth objects
- n_p = i.sum() # Number of predicted objects
-
- if n_p == 0 and n_gt == 0:
- continue
- elif n_p == 0 or n_gt == 0:
- ap.append(0)
- r.append(0)
- p.append(0)
- else:
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum()
- tpc = (tp[i]).cumsum()
-
- # Recall
- recall_curve = tpc / (n_gt + 1e-16)
- r.append(recall_curve[-1])
-
- # Precision
- precision_curve = tpc / (tpc + fpc)
- p.append(precision_curve[-1])
-
- # AP from recall-precision curve
- ap.append(compute_ap(recall_curve, precision_curve))
-
- # Compute F1 score (harmonic mean of precision and recall)
- p, r, ap = np.array(p), np.array(r), np.array(ap)
- f1 = 2 * p * r / (p + r + 1e-16)
-
- return p, r, ap, f1, unique_classes.astype("int32")
-
-
-def compute_ap(recall, precision):
- """ Compute the average precision, given the recall and precision curves.
- Code originally from https://github.com/rbgirshick/py-faster-rcnn.
-
- # Arguments
- recall: The recall curve (list).
- precision: The precision curve (list).
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
- # correct AP calculation
- # first append sentinel values at the end
- mrec = np.concatenate(([0.0], recall, [1.0]))
- mpre = np.concatenate(([0.0], precision, [0.0]))
-
- # compute the precision envelope
- for i in range(mpre.size - 1, 0, -1):
- mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
-
- # to calculate area under PR curve, look for points
- # where X axis (recall) changes value
- i = np.where(mrec[1:] != mrec[:-1])[0]
-
- # and sum (\Delta recall) * prec
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
- return ap
-
-
-def get_batch_statistics(outputs, targets, iou_threshold):
- """ Compute true positives, predicted scores and predicted labels per sample """
- batch_metrics = []
- for sample_i in range(len(outputs)):
-
- if outputs[sample_i] is None:
- continue
-
- output = outputs[sample_i]
- pred_boxes = output[:, :4]
- pred_scores = output[:, 4]
- pred_labels = output[:, -1]
-
- true_positives = np.zeros(pred_boxes.shape[0])
-
- annotations = targets[targets[:, 0] == sample_i][:, 1:]
- target_labels = annotations[:, 0] if len(annotations) else []
- if len(annotations):
- detected_boxes = []
- target_boxes = annotations[:, 1:]
-
- for pred_i, (pred_box, pred_label) in enumerate(zip(pred_boxes, pred_labels)):
-
- # Stop early once every target has been matched
- if len(detected_boxes) == len(annotations):
- break
-
- # Ignore if label is not one of the target labels
- if pred_label not in target_labels:
- continue
-
- iou, box_index = bbox_iou(pred_box.unsqueeze(0), target_boxes).max(0)
- if iou >= iou_threshold and box_index not in detected_boxes:
- true_positives[pred_i] = 1
- detected_boxes += [box_index]
- batch_metrics.append([true_positives, pred_scores, pred_labels])
- return batch_metrics
-
-
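-# IoU of two (width, height) pairs, assuming both boxes share the same center (used for anchor matching)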
-def bbox_wh_iou(wh1, wh2):
- wh2 = wh2.t()
- w1, h1 = wh1[0], wh1[1]
- w2, h2 = wh2[0], wh2[1]
- inter_area = torch.min(w1, w2) * torch.min(h1, h2)
- union_area = (w1 * h1 + 1e-16) + w2 * h2 - inter_area
- return inter_area / union_area
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True):
- """
- Returns the IoU of two bounding boxes
- """
- if not x1y1x2y2:
- # Transform from center and width to exact coordinates
- b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
- b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2
- b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2
- b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2
- else:
- # Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = \
- box1[:, 0], box1[:, 1], box1[:, 2], box1[:, 3]
- b2_x1, b2_y1, b2_x2, b2_y2 = \
- box2[:, 0], box2[:, 1], box2[:, 2], box2[:, 3]
-
- # Get the coordinates of the intersection rectangle
- inter_rect_x1 = torch.max(b1_x1, b2_x1)
- inter_rect_y1 = torch.max(b1_y1, b2_y1)
- inter_rect_x2 = torch.min(b1_x2, b2_x2)
- inter_rect_y2 = torch.min(b1_y2, b2_y2)
- # Intersection area
- inter_area = torch.clamp(inter_rect_x2 - inter_rect_x1 + 1, min=0) * torch.clamp(
- inter_rect_y2 - inter_rect_y1 + 1, min=0
- )
- # Union Area
- b1_area = (b1_x2 - b1_x1 + 1) * (b1_y2 - b1_y1 + 1)
- b2_area = (b2_x2 - b2_x1 + 1) * (b2_y2 - b2_y1 + 1)
-
- iou = inter_area / (b1_area + b2_area - inter_area + 1e-16)
-
- return iou
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) -
- torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- # iou = inter / (area1 + area2 - inter)
- return inter / (area1[:, None] + area2 - inter)
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None):
- """Performs Non-Maximum Suppression (NMS) on inference results
- Returns:
- detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
- """
-
- nc = prediction.shape[2] - 5 # number of classes
-
- # Settings
- # (pixels) maximum box width and height, reused as the per-class offset in batched NMS
- max_wh = 4096
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 1.0 # seconds to quit after
- multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img)
-
- t = time.time()
- output = [torch.zeros((0, 6), device="cpu")] * prediction.shape[0]
-
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[x[..., 4] > conf_thres] # confidence
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
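- # One detection row per (box, class) pair whose class score exceeds conf_thres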
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- # sort by confidence
- x = x[x[:, 4].argsort(descending=True)[:max_nms]]
-
- # Batched NMS
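- # Offsetting boxes by class index (c) keeps boxes of different classes from suppressing each other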
- c = x[:, 5:6] * max_wh # classes
- # boxes (offset by class), scores
- boxes, scores = x[:, :4] + c, x[:, 4]
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
-
- output[xi] = to_cpu(x[i])
-
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def print_environment_info():
- """
- Prints infos about the environment and the system.
- This should help when people make issues containg the printout.
- """
-
- print("Environment information:")
-
- # Print OS information
- print(f"System: {platform.system()} {platform.release()}")
-
- # Print poetry package version
- try:
- print(f"Current Version: {subprocess.check_output(['poetry', 'version'], stderr=subprocess.DEVNULL).decode('ascii').strip()}")
- except (subprocess.CalledProcessError, FileNotFoundError):
- print("Not using the poetry package")
-
- # Print commit hash if possible
- try:
- print(f"Current Commit Hash: {subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD'], stderr=subprocess.DEVNULL).decode('ascii').strip()}")
- except (subprocess.CalledProcessError, FileNotFoundError):
- print("No git or repo found")
diff --git a/cv/detection/yolov3/pytorch/requirements.txt b/cv/detection/yolov3/pytorch/requirements.txt
deleted file mode 100644
index a23141b0426cb04cc6a5fdb1f9a0ab289b4eb081..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/requirements.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-imgaug
-terminaltables
-torchsummary
diff --git a/cv/detection/yolov3/pytorch/run_dist_training.sh b/cv/detection/yolov3/pytorch/run_dist_training.sh
deleted file mode 100644
index 0968d3789a94c42b875ffe572f266a96643070bc..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/run_dist_training.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/bin/bash
-export PYTHONPATH=$PYTHONPATH:`pwd`
-
-LOG_DIR="logs"
-if [ ! -d "$LOG_DIR" ]; then
- mkdir -p ${LOG_DIR}
-fi
-DATE=`date +%Y%m%d%H%M%S`
-
-source ./get_num_devices.sh
-
-# Run finetuning
-python3 -m torch.distributed.launch --nproc_per_node=$IX_NUM_CUDA_VISIBLE_DEVICES --use_env \
- ./pytorchyolo/train.py --pretrained_weights checkpoints/yolov3_voc_pretrain.pth \
- --second_stage_steps 200 "$@" 2>&1 | tee ${LOG_DIR}/training_${DATE}.log
-
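-# tee hides the exit status of the training command, so check it via PIPESTATUS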
-if [[ ${PIPESTATUS[0]} != 0 ]]; then
- echo "ERROR: finetuning on VOC failed"
- exit 1
-fi
-
-exit 0
diff --git a/cv/detection/yolov3/pytorch/run_training.sh b/cv/detection/yolov3/pytorch/run_training.sh
deleted file mode 100644
index fb51026db73b96cc007d99e0e10273d456d138f8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/run_training.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash
-export PYTHONPATH=$PYTHONPATH:`pwd`
-
-LOG_DIR="logs"
-if [ ! -d "$LOG_DIR" ]; then
- mkdir -p ${LOG_DIR}
-fi
-DATE=`date +%Y%m%d%H%M%S`
-
-# Run training
-python3 pytorchyolo/train.py "$@" 2>&1 | tee ${LOG_DIR}/training_${DATE}.log
-
-if [[ ${PIPESTATUS[0]} != 0 ]]; then
- echo "ERROR: training on VOC failed"
- exit 1
-fi
-
-exit 0
diff --git a/cv/detection/yolov3/pytorch/setup.sh b/cv/detection/yolov3/pytorch/setup.sh
deleted file mode 100644
index e58f014ced7f4c779b8683f111c4387139749ef6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/setup.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-# Install packages
-echo "Start installing packages..."
-pip3 install tqdm
-pip3 install terminaltables
-ID=$(grep -oP '(?<=^ID=).+' /etc/os-release | tr -d '"')
-if [[ ${ID} == "ubuntu" ]]; then
- echo ${ID}
-apt -y install libgl1-mesa-glx
-apt -y install libgeos-dev
-elif [[ ${ID} == "Loongnix" ]]; then
- echo ${ID}
-apt -y install libgl1-mesa-glx
-apt -y install libgeos-dev
-elif [[ ${ID} == "centos" ]]; then
- echo ${ID}
-yum -y install mesa-libGL
-yum -y install geos-devel
-elif [[ ${ID} == "kylin" ]]; then
- echo ${ID}
-yum -y install mesa-libGL
-yum -y install geos-devel
-else
- echo "Unable to determine OS..."
-fi
-pip3 install cython
-pip3 install imgaug # Will automatically install opencv-python
-pip3 install torchsummary
-
-echo "Finished installing packages."
-
diff --git a/cv/detection/yolov3/pytorch/voc_annotation.py b/cv/detection/yolov3/pytorch/voc_annotation.py
deleted file mode 100644
index 358161ef6538e951cf49ea6b4fae4538890812d2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/voc_annotation.py
+++ /dev/null
@@ -1,82 +0,0 @@
-'''
-Description:
-Author: Liwei Dai
-Date: 2021-05-10 19:35:41
-LastEditors: VSCode
-LastEditTime: 2021-05-10 19:38:34
-'''
-import os
-import argparse
-import xml.etree.ElementTree as ET
-
-def convert_voc_annotation(data_path, data_type, anno_path, use_difficult_bbox=True):
-
- classes = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
- 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',
- 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa',
- 'train', 'tvmonitor']
- img_inds_file = os.path.join(data_path, 'ImageSets', 'Main', data_type + '.txt')
- # Rename the folder containing images
- try:
- os.rename(os.path.join(data_path, 'JPEGImages'), os.path.join(data_path, 'images'))
- except FileNotFoundError:
- print("JPEGImages folder has already been renamed to images")
- with open(img_inds_file, 'r') as f:
- txt = f.readlines()
- image_inds = [line.strip() for line in txt]
-
- os.makedirs(os.path.join(data_path, 'labels'), exist_ok=True)
- with open(anno_path, 'a') as f:
- for image_ind in image_inds:
- image_path = os.path.join(data_path, 'images', image_ind + '.jpg')
- label_path = os.path.join(data_path, 'labels', image_ind + '.txt') # This will be created
- xml_path = os.path.join(data_path, 'Annotations', image_ind + '.xml')
- root = ET.parse(xml_path).getroot()
- objects = root.findall('object')
- labels = []
- for obj in objects:
- difficult = obj.find('difficult').text.strip()
- if (not use_difficult_bbox) and (int(difficult) == 1):
- continue
- bbox = obj.find('bndbox')
- class_ind = classes.index(obj.find('name').text.lower().strip())
- xmin = int(bbox.find('xmin').text.strip())
- xmax = int(bbox.find('xmax').text.strip())
- ymin = int(bbox.find('ymin').text.strip())
- ymax = int(bbox.find('ymax').text.strip())
- img_size = root.find('size')
- h, w = int(img_size.find('height').text.strip()), int(img_size.find('width').text.strip())
-
- # Convert to normalized YOLO label format: class x_center y_center width height
- x_center, y_center = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
- w_obj, h_obj = abs(xmax - xmin) / w, abs(ymax - ymin) / h
-
- label = ' '.join(str(i) for i in [class_ind, x_center, y_center, w_obj, h_obj])
- labels.append(label)
- with open(label_path, 'w') as f_label:
- f_label.writelines("%s\n" % l for l in labels)
- f.write(image_path + "\n")
- return len(image_inds)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--data_path", default="./VOC/")
- parser.add_argument("--train_annotation", default="./data/voc/train.txt")
- parser.add_argument("--test_annotation", default="./data/voc/valid.txt")
- flags = parser.parse_args()
-
- if os.path.exists(flags.train_annotation): os.remove(flags.train_annotation)
- if os.path.exists(flags.test_annotation): os.remove(flags.test_annotation)
- if os.path.dirname(flags.train_annotation):
- os.makedirs(os.path.dirname(flags.train_annotation), exist_ok=True)
- if os.path.dirname(flags.test_annotation):
- os.makedirs(os.path.dirname(flags.test_annotation), exist_ok=True)
-
- num1 = convert_voc_annotation(os.path.join(flags.data_path, 'train/VOCdevkit/VOC2007'), 'trainval', flags.train_annotation, False)
- num2 = convert_voc_annotation(os.path.join(flags.data_path, 'train/VOCdevkit/VOC2012'), 'trainval', flags.train_annotation, False)
- num3 = convert_voc_annotation(os.path.join(flags.data_path, 'test/VOCdevkit/VOC2007'), 'test', flags.test_annotation, False)
- print('=> Number of images for train: %d\tNumber of images for test: %d' % (num1 + num2, num3))
-
-
diff --git a/cv/detection/yolov3/pytorch/weights/download_weights.sh b/cv/detection/yolov3/pytorch/weights/download_weights.sh
deleted file mode 100644
index d78133853cdcedd5161afc56dba1ccc0bd2cbce6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov3/pytorch/weights/download_weights.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-# Download weights for vanilla YOLOv3
-wget -c "https://pjreddie.com/media/files/yolov3.weights" --header "Referer: pjreddie.com"
-# Download weights for tiny YOLOv3
-wget -c "https://pjreddie.com/media/files/yolov3-tiny.weights" --header "Referer: pjreddie.com"
-# Download weights for backbone network
-wget -c "https://pjreddie.com/media/files/darknet53.conv.74" --header "Referer: pjreddie.com"
diff --git a/cv/detection/yolov5/pytorch/.dockerignore b/cv/detection/yolov5/pytorch/.dockerignore
deleted file mode 100644
index 9c9663f006cab1cc285c5f5903e49b26d6f88523..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/.dockerignore
+++ /dev/null
@@ -1,216 +0,0 @@
-# Repo-specific DockerIgnore -------------------------------------------------------------------------------------------
-#.git
-.cache
-.idea
-runs
-output
-coco
-storage.googleapis.com
-
-data/samples/*
-**/results*.txt
-*.jpg
-
-# Neural Network weights -----------------------------------------------------------------------------------------------
-**/*.pt
-**/*.pth
-**/*.onnx
-**/*.mlmodel
-**/*.torchscript
-**/*.torchscript.pt
-
-
-# Below Copied From .gitignore -----------------------------------------------------------------------------------------
-
-
-# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
-# Byte-compiled / optimized / DLL files
-__pycache__/
-*.py[cod]
-*$py.class
-
-# C extensions
-*.so
-
-# Distribution / packaging
-.Python
-env/
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-wheels/
-*.egg-info/
-wandb/
-.installed.cfg
-*.egg
-
-# PyInstaller
-# Usually these files are written by a python script from a template
-# before PyInstaller builds the exe, so as to inject date/other infos into it.
-*.manifest
-*.spec
-
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-
-# Unit test / coverage reports
-htmlcov/
-.tox/
-.coverage
-.coverage.*
-.cache
-nosetests.xml
-coverage.xml
-*.cover
-.hypothesis/
-
-# Translations
-*.mo
-*.pot
-
-# Django stuff:
-*.log
-local_settings.py
-
-# Flask stuff:
-instance/
-.webassets-cache
-
-# Scrapy stuff:
-.scrapy
-
-# Sphinx documentation
-docs/_build/
-
-# PyBuilder
-target/
-
-# Jupyter Notebook
-.ipynb_checkpoints
-
-# pyenv
-.python-version
-
-# celery beat schedule file
-celerybeat-schedule
-
-# SageMath parsed files
-*.sage.py
-
-# dotenv
-.env
-
-# virtualenv
-.venv*
-venv*/
-ENV*/
-
-# Spyder project settings
-.spyderproject
-.spyproject
-
-# Rope project settings
-.ropeproject
-
-# mkdocs documentation
-/site
-
-# mypy
-.mypy_cache/
-
-
-# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------
-
-# General
-.DS_Store
-.AppleDouble
-.LSOverride
-
-# Icon must end with two \r
-Icon
-Icon?
-
-# Thumbnails
-._*
-
-# Files that might appear in the root of a volume
-.DocumentRevisions-V100
-.fseventsd
-.Spotlight-V100
-.TemporaryItems
-.Trashes
-.VolumeIcon.icns
-.com.apple.timemachine.donotpresent
-
-# Directories potentially created on remote AFP share
-.AppleDB
-.AppleDesktop
-Network Trash Folder
-Temporary Items
-.apdisk
-
-
-# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
-# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
-# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
-
-# User-specific stuff:
-.idea/*
-.idea/**/workspace.xml
-.idea/**/tasks.xml
-.idea/dictionaries
-.html # Bokeh Plots
-.pg # TensorFlow Frozen Graphs
-.avi # videos
-
-# Sensitive or high-churn files:
-.idea/**/dataSources/
-.idea/**/dataSources.ids
-.idea/**/dataSources.local.xml
-.idea/**/sqlDataSources.xml
-.idea/**/dynamic.xml
-.idea/**/uiDesigner.xml
-
-# Gradle:
-.idea/**/gradle.xml
-.idea/**/libraries
-
-# CMake
-cmake-build-debug/
-cmake-build-release/
-
-# Mongo Explorer plugin:
-.idea/**/mongoSettings.xml
-
-## File-based project format:
-*.iws
-
-## Plugin-specific files:
-
-# IntelliJ
-out/
-
-# mpeltonen/sbt-idea plugin
-.idea_modules/
-
-# JIRA plugin
-atlassian-ide-plugin.xml
-
-# Cursive Clojure plugin
-.idea/replstate.xml
-
-# Crashlytics plugin (for Android Studio and IntelliJ)
-com_crashlytics_export_strings.xml
-crashlytics.properties
-crashlytics-build.properties
-fabric.properties
diff --git a/cv/detection/yolov5/pytorch/.gitignore b/cv/detection/yolov5/pytorch/.gitignore
deleted file mode 100644
index 31facb0c7e85e3e2819d32ca25fa1c0c291d2f09..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/.gitignore
+++ /dev/null
@@ -1,4 +0,0 @@
-**/__pycache__/
-runs
-weights
-datasets
diff --git a/cv/detection/yolov5/pytorch/CONTRIBUTING.md b/cv/detection/yolov5/pytorch/CONTRIBUTING.md
deleted file mode 100644
index 7c0ba3ae9f180a72d1edd3ceffd342404fb742e5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/CONTRIBUTING.md
+++ /dev/null
@@ -1,70 +0,0 @@
-## Contributing to YOLOv5 🚀
-
-We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible, whether it's:
-
-- Reporting a bug
-- Discussing the current state of the code
-- Submitting a fix
-- Proposing a new feature
-- Becoming a maintainer
-
-YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute, you will be helping push the frontiers of what's possible in AI 😃!
-
-
-## Submitting a Pull Request (PR) 🛠️
-Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:
-
-### 1. Select File to Update
-Select `requirements.txt` to update by clicking on it in GitHub.
-
-
-### 2. Click 'Edit this file'
-The button is in the top-right corner.
-
-
-### 3. Make Changes
-Change `matplotlib` version from `3.2.2` to `3.3`.
-
-
-### 4. Preview Changes and Submit PR
-Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!
-
-
-### PR recommendations
-
-To allow your work to be integrated as seamlessly as possible, we advise you to:
-- ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:
-```bash
-git remote add upstream https://github.com/ultralytics/yolov5.git
-git fetch upstream
-git checkout feature # <----- replace 'feature' with local branch name
-git merge upstream/master
-git push -u origin -f
-```
-- ✅ Verify all Continuous Integration (CI) **checks are passing**.
-- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ -Bruce Lee
-
-
-## Submitting a Bug Report 🐛
-
-If you spot a problem with YOLOv5 please submit a Bug Report!
-
-For us to start investigating a possible problem, we first need to be able to reproduce it ourselves. We've created a few short guidelines below to help users provide what we need in order to get started.
-
-When asking a question, people will be better able to provide help if you provide **code** that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces the problem should be:
-
-* ✅ **Minimal** – Use as little code as possible that still produces the same problem
-* ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
-* ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem
-
-In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code should be:
-
-* ✅ **Current** – Verify that your code is up-to-date with current GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new copy to ensure your problem has not already been resolved by previous commits.
-* ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
-
-If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better understand and diagnose your problem.
-
-
-## License
-
-By contributing, you agree that your contributions will be licensed under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/).
diff --git a/cv/detection/yolov5/pytorch/Dockerfile b/cv/detection/yolov5/pytorch/Dockerfile
deleted file mode 100644
index e22c1106f23db1f84555b005c26fe8e26d1d47f8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/Dockerfile
+++ /dev/null
@@ -1,50 +0,0 @@
-# Start FROM Nvidia PyTorch image https://ngc.nvidia.com/catalog/containers/nvidia:pytorch
-FROM nvcr.io/nvidia/pytorch:21.05-py3
-
-# Install linux packages
-RUN apt update && apt install -y zip htop screen libgl1-mesa-glx
-
-# Install python dependencies
-COPY requirements.txt .
-RUN python -m pip install --upgrade pip
-RUN pip uninstall -y nvidia-tensorboard nvidia-tensorboard-plugin-dlprof
-RUN pip install --no-cache -r requirements.txt coremltools onnx gsutil notebook
-RUN pip install --no-cache -U torch torchvision numpy
-# RUN pip install --no-cache torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
-
-# Create working directory
-RUN mkdir -p /usr/src/app
-WORKDIR /usr/src/app
-
-# Copy contents
-COPY . /usr/src/app
-
-# Set environment variables
-ENV HOME=/usr/src/app
-
-
-# Usage Examples -------------------------------------------------------------------------------------------------------
-
-# Build and Push
-# t=ultralytics/yolov5:latest && sudo docker build -t $t . && sudo docker push $t
-
-# Pull and Run
-# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all $t
-
-# Pull and Run with local directory access
-# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/datasets:/usr/src/datasets $t
-
-# Kill all
-# sudo docker kill $(sudo docker ps -q)
-
-# Kill all image-based
-# sudo docker kill $(sudo docker ps -qa --filter ancestor=ultralytics/yolov5:latest)
-
-# Bash into running container
-# sudo docker exec -it 5a9b5863d93d bash
-
-# Bash into stopped container
-# id=$(sudo docker ps -qa) && sudo docker start $id && sudo docker exec -it $id bash
-
-# Clean up
-# docker system prune -a --volumes
diff --git a/cv/detection/yolov5/pytorch/LICENSE b/cv/detection/yolov5/pytorch/LICENSE
deleted file mode 100644
index 9e419e042146a2ce2e354202d4f7d8e4a3d66f31..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/LICENSE
+++ /dev/null
@@ -1,674 +0,0 @@
-GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
- <one line to give the program's name and a brief idea of what it does.>
- Copyright (C) <year> <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
- <program> Copyright (C) <year> <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<https://www.gnu.org/philosophy/why-not-lgpl.html>.
\ No newline at end of file
diff --git a/cv/detection/yolov5/pytorch/README.md b/cv/detection/yolov5/pytorch/README.md
index fb3b33ddc6f70db3dcd3eff19180fd2bdc5bea27..b202d415aac24061f952392a1ee3fa1fa1eb3d34 100644
--- a/cv/detection/yolov5/pytorch/README.md
+++ b/cv/detection/yolov5/pytorch/README.md
@@ -4,8 +4,10 @@ YOLOv5 🚀 is a family of object detection architectures and models pretrained
## Step 1: Installing packages
-```shell
-# install libGL, requirements.
+```bash
+## clone yolov5 and install
+git clone https://gitee.com/deep-spark/deepsparkhub-GPL.git
+cd deepsparkhub-GPL/cv/detection/yolov5/pytorch/
bash init.sh
```
@@ -37,7 +39,7 @@ coco2017
Modify the configuration file (data/coco.yaml)
```bash
-$ vim data/coco.yaml
+vim data/coco.yaml
# path: the root of coco data
# train: the relative path of train images
# val: the relative path of valid images
@@ -50,13 +52,13 @@ Train the yolov5 model as follows, the train log is saved in ./runs/train/exp
### On single GPU
```bash
-$ python3 train.py --data ./data/coco.yaml --batch-size 32 --cfg ./models/yolov5s.yaml --weights ''
+python3 train.py --data ./data/coco.yaml --batch-size 32 --cfg ./models/yolov5s.yaml --weights ''
```
### On single GPU (AMP)
```bash
-$ python3 train.py --data ./data/coco.yaml --batch-size 32 --cfg ./models/yolov5s.yaml --weights '' --amp
+python3 train.py --data ./data/coco.yaml --batch-size 32 --cfg ./models/yolov5s.yaml --weights '' --amp
```
### Multiple GPUs on one machine
@@ -64,7 +66,7 @@ $ python3 train.py --data ./data/coco.yaml --batch-size 32 --cfg ./models/yolov5
```bash
# eight cards
# YOLOv5s
-$ python3 -m torch.distributed.launch --nproc_per_node 8 \
+python3 -m torch.distributed.launch --nproc_per_node 8 \
train.py \
--data ./data/coco.yaml \
--batch-size 64 \
@@ -72,14 +74,14 @@ $ python3 -m torch.distributed.launch --nproc_per_node 8 \
--device 0,1,2,3,4,5,6,7
# YOLOv5m
-$ bash run.sh
+bash run.sh
```
### Multiple GPUs on one machine (AMP)
```bash
# eight cards
-$ python3 -m torch.distributed.launch --nproc_per_node 8 \
+python3 -m torch.distributed.launch --nproc_per_node 8 \
train.py \
--data ./data/coco.yaml \
--batch-size 256 \
@@ -92,24 +94,23 @@ $ python3 -m torch.distributed.launch --nproc_per_node 8 \
Test the yolov5 model as follows; the results are saved in ./runs/detect:
```bash
-$ python3 detect.py --source ./data/images/bus.jpg --weights yolov5s.pt --img 640
+python3 detect.py --source ./data/images/bus.jpg --weights yolov5s.pt --img 640
-$ python3 detect.py --source ./data/images/zidane.jpg --weights yolov5s.pt --img 640
+python3 detect.py --source ./data/images/zidane.jpg --weights yolov5s.pt --img 640
```
## Results on BI-V100
| GPUs | FP16 | Batch size | FPS | E2E | mAP@.5 |
-| ------ | ------ | ------------ | ----- | ----- | -------- |
+| ---- | ---- | ---------- | --- | --- | ------ |
| 1x1 | True | 64 | 81 | N/A | N/A |
| 1x8 | True | 64 | 598 | 24h | 0.632 |
-
| Convergence criteria | Configuration (x denotes number of GPUs) | Performance | Accuracy | Power(W) | Scalability | Memory utilization(G) | Stability |
-| ---------------------- | ------------------------------------------ | ------------- | ---------- | ------------ | ------------- | ------------------------- | ----------- |
+| -------------------- | ---------------------------------------- | ----------- | -------- | ---------- | ----------- | ----------------------- | --------- |
| mAP:0.5 | SDK V2.2, bs:128, 8x, AMP | 1228 | 0.56 | 140\*8 | 0.92 | 27.3\*8 | 1 |
## Reference
-https://github.com/ultralytics/yolov5
+- [YOLOv5](https://github.com/ultralytics/yolov5)
diff --git a/cv/detection/yolov5/pytorch/data/Argoverse_HD.yaml b/cv/detection/yolov5/pytorch/data/Argoverse_HD.yaml
deleted file mode 100644
index ad1a52254d746a00ebcab5a3acaad974d0195b78..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/Argoverse_HD.yaml
+++ /dev/null
@@ -1,66 +0,0 @@
-# Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/
-# Train command: python train.py --data Argoverse_HD.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/Argoverse
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/Argoverse # dataset root dir
-train: Argoverse-1.1/images/train/ # train images (relative to 'path') 39384 images
-val: Argoverse-1.1/images/val/ # val images (relative to 'path') 15062 images
-test: Argoverse-1.1/images/test/ # test images (optional) https://eval.ai/web/challenges/challenge-page/800/overview
-
-# Classes
-nc: 8 # number of classes
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign' ] # class names
-
-
-# Download script/URL (optional) ---------------------------------------------------------------------------------------
-download: |
- import json
-
- from tqdm import tqdm
- from utils.general import download, Path
-
-
- def argoverse2yolo(set):
- labels = {}
- a = json.load(open(set, "rb"))
- for annot in tqdm(a['annotations'], desc=f"Converting {set} to YOLOv5 format..."):
- img_id = annot['image_id']
- img_name = a['images'][img_id]['name']
- img_label_name = img_name[:-3] + "txt"
-
- cls = annot['category_id'] # instance class id
- x_center, y_center, width, height = annot['bbox']
- x_center = (x_center + width / 2) / 1920.0 # offset and scale
- y_center = (y_center + height / 2) / 1200.0 # offset and scale
- width /= 1920.0 # scale
- height /= 1200.0 # scale
-
- img_dir = set.parents[2] / 'Argoverse-1.1' / 'labels' / a['seq_dirs'][a['images'][annot['image_id']]['sid']]
- if not img_dir.exists():
- img_dir.mkdir(parents=True, exist_ok=True)
-
- k = str(img_dir / img_label_name)
- if k not in labels:
- labels[k] = []
- labels[k].append(f"{cls} {x_center} {y_center} {width} {height}\n")
-
- for k in labels:
- with open(k, "w") as f:
- f.writelines(labels[k])
-
-
- # Download
- dir = Path('../datasets/Argoverse') # dataset root dir
- urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip']
- download(urls, dir=dir, delete=False)
-
- # Convert
- annotations_dir = 'Argoverse-HD/annotations/'
- (dir / 'Argoverse-1.1' / 'tracking').rename(dir / 'Argoverse-1.1' / 'images') # rename 'tracking' to 'images'
- for d in "train.json", "val.json":
- argoverse2yolo(dir / annotations_dir / d) # convert Argoverse annotations to YOLO labels
diff --git a/cv/detection/yolov5/pytorch/data/GlobalWheat2020.yaml b/cv/detection/yolov5/pytorch/data/GlobalWheat2020.yaml
deleted file mode 100644
index b77534944ed7f2b479f453fa0edb43ad9465517f..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/GlobalWheat2020.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-# Global Wheat 2020 dataset http://www.global-wheat.com/
-# Train command: python train.py --data GlobalWheat2020.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/GlobalWheat2020
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/GlobalWheat2020 # dataset root dir
-train: # train images (relative to 'path') 3422 images
- - images/arvalis_1
- - images/arvalis_2
- - images/arvalis_3
- - images/ethz_1
- - images/rres_1
- - images/inrae_1
- - images/usask_1
-val: # val images (relative to 'path') 748 images (WARNING: train set contains ethz_1)
- - images/ethz_1
-test: # test images (optional) 1276 images
- - images/utokyo_1
- - images/utokyo_2
- - images/nau_1
- - images/uq_1
-
-# Classes
-nc: 1 # number of classes
-names: [ 'wheat_head' ] # class names
-
-
-# Download script/URL (optional) ---------------------------------------------------------------------------------------
-download: |
- from utils.general import download, Path
-
- # Download
- dir = Path(yaml['path']) # dataset root dir
- urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip',
- 'https://github.com/ultralytics/yolov5/releases/download/v1.0/GlobalWheat2020_labels.zip']
- download(urls, dir=dir)
-
- # Make Directories
- for p in 'annotations', 'images', 'labels':
- (dir / p).mkdir(parents=True, exist_ok=True)
-
- # Move
- for p in 'arvalis_1', 'arvalis_2', 'arvalis_3', 'ethz_1', 'rres_1', 'inrae_1', 'usask_1', \
- 'utokyo_1', 'utokyo_2', 'nau_1', 'uq_1':
- (dir / p).rename(dir / 'images' / p) # move to /images
- f = (dir / p).with_suffix('.json') # json file
- if f.exists():
- f.rename((dir / 'annotations' / p).with_suffix('.json')) # move to /annotations
diff --git a/cv/detection/yolov5/pytorch/data/Objects365.yaml b/cv/detection/yolov5/pytorch/data/Objects365.yaml
deleted file mode 100644
index e365c82cab08fc241a46dbdd1f5d0b73d20c87a7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/Objects365.yaml
+++ /dev/null
@@ -1,103 +0,0 @@
-# Objects365 dataset https://www.objects365.org/
-# Train command: python train.py --data Objects365.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/Objects365
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/Objects365 # dataset root dir
-train: images/train # train images (relative to 'path') 1742289 images
-val: images/val # val images (relative to 'path') 5570 images
-test: # test images (optional)
-
-# Classes
-nc: 365 # number of classes
-names: [ 'Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
- 'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
- 'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
- 'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
- 'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',
- 'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',
- 'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',
- 'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',
- 'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',
- 'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',
- 'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',
- 'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',
- 'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',
- 'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',
- 'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',
- 'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',
- 'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',
- 'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',
- 'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',
- 'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',
- 'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',
- 'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',
- 'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',
- 'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',
- 'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',
- 'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',
- 'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',
- 'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',
- 'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',
- 'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',
- 'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',
- 'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',
- 'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',
- 'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',
- 'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',
- 'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',
- 'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',
- 'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
- 'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
- 'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
- 'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis' ]
-
-
-# Download script/URL (optional) ---------------------------------------------------------------------------------------
-download: |
- from pycocotools.coco import COCO
- from tqdm import tqdm
-
- from utils.general import download, Path
-
- # Make Directories
- dir = Path(yaml['path']) # dataset root dir
- for p in 'images', 'labels':
- (dir / p).mkdir(parents=True, exist_ok=True)
- for q in 'train', 'val':
- (dir / p / q).mkdir(parents=True, exist_ok=True)
-
- # Download
- url = "https://dorc.ks3-cn-beijing.ksyun.com/data-set/2020Objects365%E6%95%B0%E6%8D%AE%E9%9B%86/train/"
- download([url + 'zhiyuan_objv2_train.tar.gz'], dir=dir, delete=False) # annotations json
- download([url + f for f in [f'patch{i}.tar.gz' for i in range(51)]], dir=dir / 'images' / 'train',
- curl=True, delete=False, threads=8)
-
- # Move
- train = dir / 'images' / 'train'
- for f in tqdm(train.rglob('*.jpg'), desc=f'Moving images'):
- f.rename(train / f.name) # move to /images/train
-
- # Labels
- coco = COCO(dir / 'zhiyuan_objv2_train.json')
- names = [x["name"] for x in coco.loadCats(coco.getCatIds())]
- for cid, cat in enumerate(names):
- catIds = coco.getCatIds(catNms=[cat])
- imgIds = coco.getImgIds(catIds=catIds)
- for im in tqdm(coco.loadImgs(imgIds), desc=f'Class {cid + 1}/{len(names)} {cat}'):
- width, height = im["width"], im["height"]
- path = Path(im["file_name"]) # image filename
- try:
- with open(dir / 'labels' / 'train' / path.with_suffix('.txt').name, 'a') as file:
- annIds = coco.getAnnIds(imgIds=im["id"], catIds=catIds, iscrowd=None)
- for a in coco.loadAnns(annIds):
- x, y, w, h = a['bbox'] # bounding box in xywh (xy top-left corner)
- x, y = x + w / 2, y + h / 2 # xy to center
- file.write(f"{cid} {x / width:.5f} {y / height:.5f} {w / width:.5f} {h / height:.5f}\n")
-
- except Exception as e:
- print(e)
diff --git a/cv/detection/yolov5/pytorch/data/SKU-110K.yaml b/cv/detection/yolov5/pytorch/data/SKU-110K.yaml
deleted file mode 100644
index 7087bb9c2893e68788ec2c12f5248f0298b7fcc6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/SKU-110K.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-# SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19
-# Train command: python train.py --data SKU-110K.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/SKU-110K
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/SKU-110K # dataset root dir
-train: train.txt # train images (relative to 'path') 8219 images
-val: val.txt # val images (relative to 'path') 588 images
-test: test.txt # test images (optional) 2936 images
-
-# Classes
-nc: 1 # number of classes
-names: [ 'object' ] # class names
-
-
-# Download script/URL (optional) ---------------------------------------------------------------------------------------
-download: |
- import shutil
- from tqdm import tqdm
- from utils.general import np, pd, Path, download, xyxy2xywh
-
- # Download
- dir = Path(yaml['path']) # dataset root dir
- parent = Path(dir.parent) # download dir
- urls = ['http://trax-geometry.s3.amazonaws.com/cvpr_challenge/SKU110K_fixed.tar.gz']
- download(urls, dir=parent, delete=False)
-
- # Rename directories
- if dir.exists():
- shutil.rmtree(dir)
- (parent / 'SKU110K_fixed').rename(dir) # rename dir
- (dir / 'labels').mkdir(parents=True, exist_ok=True) # create labels dir
-
- # Convert labels
- names = 'image', 'x1', 'y1', 'x2', 'y2', 'class', 'image_width', 'image_height' # column names
- for d in 'annotations_train.csv', 'annotations_val.csv', 'annotations_test.csv':
- x = pd.read_csv(dir / 'annotations' / d, names=names).values # annotations
- images, unique_images = x[:, 0], np.unique(x[:, 0])
- with open((dir / d).with_suffix('.txt').__str__().replace('annotations_', ''), 'w') as f:
- f.writelines(f'./images/{s}\n' for s in unique_images)
- for im in tqdm(unique_images, desc=f'Converting {dir / d}'):
- cls = 0 # single-class dataset
- with open((dir / 'labels' / im).with_suffix('.txt'), 'a') as f:
- for r in x[images == im]:
- w, h = r[6], r[7] # image width, height
- xywh = xyxy2xywh(np.array([[r[1] / w, r[2] / h, r[3] / w, r[4] / h]]))[0] # instance
- f.write(f"{cls} {xywh[0]:.5f} {xywh[1]:.5f} {xywh[2]:.5f} {xywh[3]:.5f}\n") # write label
diff --git a/cv/detection/yolov5/pytorch/data/VOC.yaml b/cv/detection/yolov5/pytorch/data/VOC.yaml
deleted file mode 100644
index 3d878fa67a605c65b392dce2b3621b89ed5dd389..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/VOC.yaml
+++ /dev/null
@@ -1,79 +0,0 @@
-# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC/
-# Train command: python train.py --data VOC.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/VOC
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/VOC
-train: # train images (relative to 'path') 16551 images
- - images/train2012
- - images/train2007
- - images/val2012
- - images/val2007
-val: # val images (relative to 'path') 4952 images
- - images/test2007
-test: # test images (optional)
- - images/test2007
-
-# Classes
-nc: 20 # number of classes
-names: [ 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
- 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' ] # class names
-
-
-# Download script/URL (optional) ---------------------------------------------------------------------------------------
-download: |
- import xml.etree.ElementTree as ET
-
- from tqdm import tqdm
- from utils.general import download, Path
-
-
- def convert_label(path, lb_path, year, image_id):
- def convert_box(size, box):
- dw, dh = 1. / size[0], 1. / size[1]
- x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
- return x * dw, y * dh, w * dw, h * dh
-
- in_file = open(path / f'VOC{year}/Annotations/{image_id}.xml')
- out_file = open(lb_path, 'w')
- tree = ET.parse(in_file)
- root = tree.getroot()
- size = root.find('size')
- w = int(size.find('width').text)
- h = int(size.find('height').text)
-
- for obj in root.iter('object'):
- cls = obj.find('name').text
- if cls in yaml['names'] and not int(obj.find('difficult').text) == 1:
- xmlbox = obj.find('bndbox')
- bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')])
- cls_id = yaml['names'].index(cls) # class id
- out_file.write(" ".join([str(a) for a in (cls_id, *bb)]) + '\n')
-
-
- # Download
- dir = Path(yaml['path']) # dataset root dir
- url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
- urls = [url + 'VOCtrainval_06-Nov-2007.zip', # 446MB, 5012 images
- url + 'VOCtest_06-Nov-2007.zip', # 438MB, 4953 images
- url + 'VOCtrainval_11-May-2012.zip'] # 1.95GB, 17126 images
- download(urls, dir=dir / 'images', delete=False)
-
- # Convert
- path = dir / f'images/VOCdevkit'
- for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'):
- imgs_path = dir / 'images' / f'{image_set}{year}'
- lbs_path = dir / 'labels' / f'{image_set}{year}'
- imgs_path.mkdir(exist_ok=True, parents=True)
- lbs_path.mkdir(exist_ok=True, parents=True)
-
- image_ids = open(path / f'VOC{year}/ImageSets/Main/{image_set}.txt').read().strip().split()
- for id in tqdm(image_ids, desc=f'{image_set}{year}'):
- f = path / f'VOC{year}/JPEGImages/{id}.jpg' # old img path
- lb_path = (lbs_path / f.name).with_suffix('.txt') # new label path
- f.rename(imgs_path / f.name) # move image
- convert_label(path, lb_path, year, id) # convert labels to YOLO format
diff --git a/cv/detection/yolov5/pytorch/data/VisDrone.yaml b/cv/detection/yolov5/pytorch/data/VisDrone.yaml
deleted file mode 100644
index c1cd38d1e10fc148d774342d0ef252c88e9a7645..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/VisDrone.yaml
+++ /dev/null
@@ -1,60 +0,0 @@
-# VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset
-# Train command: python train.py --data VisDrone.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/VisDrone
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/VisDrone # dataset root dir
-train: VisDrone2019-DET-train/images # train images (relative to 'path') 6471 images
-val: VisDrone2019-DET-val/images # val images (relative to 'path') 548 images
-test: VisDrone2019-DET-test-dev/images # test images (optional) 1610 images
-
-# Classes
-nc: 10 # number of classes
-names: [ 'pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor' ]
-
-
-# Download script/URL (optional) ---------------------------------------------------------------------------------------
-download: |
- from utils.general import download, os, Path
-
- def visdrone2yolo(dir):
- from PIL import Image
- from tqdm import tqdm
-
- def convert_box(size, box):
- # Convert VisDrone box to YOLO xywh box
- dw = 1. / size[0]
- dh = 1. / size[1]
- return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh
-
- (dir / 'labels').mkdir(parents=True, exist_ok=True) # make labels directory
- pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')
- for f in pbar:
- img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size
- lines = []
- with open(f, 'r') as file: # read annotation.txt
- for row in [x.split(',') for x in file.read().strip().splitlines()]:
- if row[4] == '0': # VisDrone 'ignored regions' class 0
- continue
- cls = int(row[5]) - 1
- box = convert_box(img_size, tuple(map(int, row[:4])))
- lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
- with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
- fl.writelines(lines) # write label.txt
-
-
- # Download
- dir = Path(yaml['path']) # dataset root dir
- urls = ['https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-train.zip',
- 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip',
- 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip',
- 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip']
- download(urls, dir=dir)
-
- # Convert
- for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
- visdrone2yolo(dir / d) # convert VisDrone annotations to YOLO labels
diff --git a/cv/detection/yolov5/pytorch/data/coco.yaml b/cv/detection/yolov5/pytorch/data/coco.yaml
deleted file mode 100644
index ab72c8242f6cb5de719eb7ef9299cc2fb9fb3ab5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/coco.yaml
+++ /dev/null
@@ -1,43 +0,0 @@
-# COCO 2017 dataset http://cocodataset.org
-# Train command: python train.py --data coco.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/coco
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ./datasets/coco # dataset root dir
-train: train2017.txt # train images (relative to 'path') 118287 images
-val: val2017.txt # val images (relative to 'path') 5000 images
-test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
-
-# Classes
-nc: 80 # number of classes
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
- 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
- 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
- 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
- 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
- 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
- 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
- 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
- 'hair drier', 'toothbrush' ] # class names
-
-
-# Download script/URL (optional)
-download: |
- from utils.general import download, Path
-
- # Download labels
- segments = False # segment or box labels
- dir = Path(yaml['path']) # dataset root dir
- url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
- urls = [url + ('coco2017labels-segments.zip' if segments else 'coco2017labels.zip')] # labels
- download(urls, dir=dir.parent)
-
- # Download data
- urls = ['http://images.cocodataset.org/zips/train2017.zip', # 19G, 118k images
- 'http://images.cocodataset.org/zips/val2017.zip', # 1G, 5k images
- 'http://images.cocodataset.org/zips/test2017.zip'] # 7G, 41k images (optional)
- download(urls, dir=dir / 'images', threads=3)
diff --git a/cv/detection/yolov5/pytorch/data/coco128.yaml b/cv/detection/yolov5/pytorch/data/coco128.yaml
deleted file mode 100644
index e75628dad26935e9a433cc443a762ec9b4d28f47..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/coco128.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
-# COCO 2017 dataset http://cocodataset.org - first 128 training images
-# Train command: python train.py --data coco128.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/coco128
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ./datasets/coco128 # dataset root dir
-train: images/train2017 # train images (relative to 'path') 128 images
-val: images/train2017 # val images (relative to 'path') 128 images
-test: # test images (optional)
-
-# Classes
-nc: 80 # number of classes
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
- 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
- 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
- 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
- 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
- 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
- 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
- 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
- 'hair drier', 'toothbrush' ] # class names
-
-
-# Download script/URL (optional)
-download: https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip
diff --git a/cv/detection/yolov5/pytorch/data/hyps/hyp.finetune.yaml b/cv/detection/yolov5/pytorch/data/hyps/hyp.finetune.yaml
deleted file mode 100644
index 237cd5bc19a1802f7cb364d657e31b623a64dee8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/hyps/hyp.finetune.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-# Hyperparameters for VOC finetuning
-# python train.py --batch 64 --weights yolov5m.pt --data VOC.yaml --img 512 --epochs 50
-# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
-
-
-# Hyperparameter Evolution Results
-# Generations: 306
-# P R mAP.5 mAP.5:.95 box obj cls
-# Metrics: 0.6 0.936 0.896 0.684 0.0115 0.00805 0.00146
-
-lr0: 0.0032
-lrf: 0.12
-momentum: 0.843
-weight_decay: 0.00036
-warmup_epochs: 2.0
-warmup_momentum: 0.5
-warmup_bias_lr: 0.05
-box: 0.0296
-cls: 0.243
-cls_pw: 0.631
-obj: 0.301
-obj_pw: 0.911
-iou_t: 0.2
-anchor_t: 2.91
-# anchors: 3.63
-fl_gamma: 0.0
-hsv_h: 0.0138
-hsv_s: 0.664
-hsv_v: 0.464
-degrees: 0.373
-translate: 0.245
-scale: 0.898
-shear: 0.602
-perspective: 0.0
-flipud: 0.00856
-fliplr: 0.5
-mosaic: 1.0
-mixup: 0.243
-copy_paste: 0.0
diff --git a/cv/detection/yolov5/pytorch/data/hyps/hyp.finetune_objects365.yaml b/cv/detection/yolov5/pytorch/data/hyps/hyp.finetune_objects365.yaml
deleted file mode 100644
index 435fa7a451191cb11041b4eb9a950b040766cfe3..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/hyps/hyp.finetune_objects365.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
-lr0: 0.00258
-lrf: 0.17
-momentum: 0.779
-weight_decay: 0.00058
-warmup_epochs: 1.33
-warmup_momentum: 0.86
-warmup_bias_lr: 0.0711
-box: 0.0539
-cls: 0.299
-cls_pw: 0.825
-obj: 0.632
-obj_pw: 1.0
-iou_t: 0.2
-anchor_t: 3.44
-anchors: 3.2
-fl_gamma: 0.0
-hsv_h: 0.0188
-hsv_s: 0.704
-hsv_v: 0.36
-degrees: 0.0
-translate: 0.0902
-scale: 0.491
-shear: 0.0
-perspective: 0.0
-flipud: 0.0
-fliplr: 0.5
-mosaic: 1.0
-mixup: 0.0
-copy_paste: 0.0
diff --git a/cv/detection/yolov5/pytorch/data/hyps/hyp.scratch-p6.yaml b/cv/detection/yolov5/pytorch/data/hyps/hyp.scratch-p6.yaml
deleted file mode 100644
index fc1d8ebe087604d949f8188d90644435525eeaa5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/hyps/hyp.scratch-p6.yaml
+++ /dev/null
@@ -1,34 +0,0 @@
-# Hyperparameters for COCO training from scratch
-# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
-# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
-
-
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
-lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf)
-momentum: 0.937 # SGD momentum/Adam beta1
-weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
-warmup_momentum: 0.8 # warmup initial momentum
-warmup_bias_lr: 0.1 # warmup initial bias lr
-box: 0.05 # box loss gain
-cls: 0.3 # cls loss gain
-cls_pw: 1.0 # cls BCELoss positive_weight
-obj: 0.7 # obj loss gain (scale with pixels)
-obj_pw: 1.0 # obj BCELoss positive_weight
-iou_t: 0.20 # IoU training threshold
-anchor_t: 4.0 # anchor-multiple threshold
-# anchors: 3 # anchors per output layer (0 to ignore)
-fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
-hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
-hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
-hsv_v: 0.4 # image HSV-Value augmentation (fraction)
-degrees: 0.0 # image rotation (+/- deg)
-translate: 0.1 # image translation (+/- fraction)
-scale: 0.9 # image scale (+/- gain)
-shear: 0.0 # image shear (+/- deg)
-perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
-flipud: 0.0 # image flip up-down (probability)
-fliplr: 0.5 # image flip left-right (probability)
-mosaic: 1.0 # image mosaic (probability)
-mixup: 0.0 # image mixup (probability)
-copy_paste: 0.0 # segment copy-paste (probability)
diff --git a/cv/detection/yolov5/pytorch/data/hyps/hyp.scratch.yaml b/cv/detection/yolov5/pytorch/data/hyps/hyp.scratch.yaml
deleted file mode 100644
index b2cf2e32c6384c11dd8a9605d71340ee8122944c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/hyps/hyp.scratch.yaml
+++ /dev/null
@@ -1,34 +0,0 @@
-# Hyperparameters for COCO training from scratch
-# python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300
-# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
-
-
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
-lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf)
-momentum: 0.937 # SGD momentum/Adam beta1
-weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
-warmup_momentum: 0.8 # warmup initial momentum
-warmup_bias_lr: 0.1 # warmup initial bias lr
-box: 0.05 # box loss gain
-cls: 0.5 # cls loss gain
-cls_pw: 1.0 # cls BCELoss positive_weight
-obj: 1.0 # obj loss gain (scale with pixels)
-obj_pw: 1.0 # obj BCELoss positive_weight
-iou_t: 0.20 # IoU training threshold
-anchor_t: 4.0 # anchor-multiple threshold
-# anchors: 3 # anchors per output layer (0 to ignore)
-fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
-hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
-hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
-hsv_v: 0.4 # image HSV-Value augmentation (fraction)
-degrees: 0.0 # image rotation (+/- deg)
-translate: 0.1 # image translation (+/- fraction)
-scale: 0.5 # image scale (+/- gain)
-shear: 0.0 # image shear (+/- deg)
-perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
-flipud: 0.0 # image flip up-down (probability)
-fliplr: 0.5 # image flip left-right (probability)
-mosaic: 1.0 # image mosaic (probability)
-mixup: 0.0 # image mixup (probability)
-copy_paste: 0.0 # segment copy-paste (probability)
diff --git a/cv/detection/yolov5/pytorch/data/images/bus.jpg b/cv/detection/yolov5/pytorch/data/images/bus.jpg
deleted file mode 100644
index b43e311165c785f000eb7493ff8fb662d06a3f83..0000000000000000000000000000000000000000
Binary files a/cv/detection/yolov5/pytorch/data/images/bus.jpg and /dev/null differ
diff --git a/cv/detection/yolov5/pytorch/data/images/bus_res.jpg b/cv/detection/yolov5/pytorch/data/images/bus_res.jpg
deleted file mode 100644
index ab88dfea438c9b3792cdcd4c827328f3b0f0aeed..0000000000000000000000000000000000000000
Binary files a/cv/detection/yolov5/pytorch/data/images/bus_res.jpg and /dev/null differ
diff --git a/cv/detection/yolov5/pytorch/data/images/zidane.jpg b/cv/detection/yolov5/pytorch/data/images/zidane.jpg
deleted file mode 100644
index 92d72ea124760ce5dbf9425e3aa8f371e7481328..0000000000000000000000000000000000000000
Binary files a/cv/detection/yolov5/pytorch/data/images/zidane.jpg and /dev/null differ
diff --git a/cv/detection/yolov5/pytorch/data/images/zidane_res.jpg b/cv/detection/yolov5/pytorch/data/images/zidane_res.jpg
deleted file mode 100644
index 69699523ed0c81de4005393754a836780b177eaa..0000000000000000000000000000000000000000
Binary files a/cv/detection/yolov5/pytorch/data/images/zidane_res.jpg and /dev/null differ
diff --git a/cv/detection/yolov5/pytorch/data/scripts/download_weights.sh b/cv/detection/yolov5/pytorch/data/scripts/download_weights.sh
deleted file mode 100644
index 6a279f1636fc6f3aaa6c7f9b083ed63ff82576c0..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/data/scripts/download_weights.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-# Download latest models from https://github.com/ultralytics/yolov5/releases
-# Usage:
-# $ bash path/to/download_weights.sh
-
-python - <<EOF
-from utils.google_utils import attempt_download
-
-for x in ['s', 'm', 'l', 'x']:
-    attempt_download(f'yolov5{x}.pt')
-
-EOF
diff --git a/cv/detection/yolov5/pytorch/data/xView.yaml b/cv/detection/yolov5/pytorch/data/xView.yaml
deleted file mode 100644
--- a/cv/detection/yolov5/pytorch/data/xView.yaml
+++ /dev/null
@@ -1,101 +0,0 @@
-# xView dataset https://challenge.xviewdataset.org
-# NOTE: DOWNLOAD DATA MANUALLY from URL above and unzip to /datasets/xView before running train command below
-# Train command: python train.py --data xView.yaml
-# Default dataset location is next to YOLOv5:
-# /parent
-# /datasets/xView
-# /yolov5
-
-
-# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
-path: ../datasets/xView # dataset root dir
-train: images/autosplit_train.txt # train images (relative to 'path') 90% of 847 train images
-val: images/autosplit_val.txt # train images (relative to 'path') 10% of 847 train images
-
-# Classes
-nc: 60 # number of classes
-names: [ 'Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
- 'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
- 'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
- 'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
- 'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',
- 'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
- 'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
- 'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
- 'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower' ] # class names
-
-
-# Download script/URL (optional) ---------------------------------------------------------------------------------------
-download: |
- import json
- import os
- from pathlib import Path
-
- import numpy as np
- from PIL import Image
- from tqdm import tqdm
-
- from utils.datasets import autosplit
- from utils.general import download, xyxy2xywhn
-
-
- def convert_labels(fname=Path('xView/xView_train.geojson')):
- # Convert xView geoJSON labels to YOLO format
- path = fname.parent
- with open(fname) as f:
- print(f'Loading {fname}...')
- data = json.load(f)
-
- # Make dirs
- labels = Path(path / 'labels' / 'train')
- os.system(f'rm -rf {labels}')
- labels.mkdir(parents=True, exist_ok=True)
-
- # xView classes 11-94 to 0-59
- xview_class2index = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, -1, 9, 10, 11,
- 12, 13, 14, 15, -1, -1, 16, 17, 18, 19, 20, 21, 22, -1, 23, 24, 25, -1, 26, 27, -1, 28, -1,
- 29, 30, 31, 32, 33, 34, 35, 36, 37, -1, 38, 39, 40, 41, 42, 43, 44, 45, -1, -1, -1, -1, 46,
- 47, 48, 49, -1, 50, 51, -1, 52, -1, -1, -1, 53, 54, -1, 55, -1, -1, 56, -1, 57, -1, 58, 59]
-
- shapes = {}
- for feature in tqdm(data['features'], desc=f'Converting {fname}'):
- p = feature['properties']
- if p['bounds_imcoords']:
- id = p['image_id']
- file = path / 'train_images' / id
- if file.exists(): # 1395.tif missing
- try:
- box = np.array([int(num) for num in p['bounds_imcoords'].split(",")])
- assert box.shape[0] == 4, f'incorrect box shape {box.shape[0]}'
- cls = p['type_id']
- cls = xview_class2index[int(cls)] # xView class to 0-60
- assert 59 >= cls >= 0, f'incorrect class index {cls}'
-
- # Write YOLO label
- if id not in shapes:
- shapes[id] = Image.open(file).size
- box = xyxy2xywhn(box[None].astype(np.float), w=shapes[id][0], h=shapes[id][1], clip=True)
- with open((labels / id).with_suffix('.txt'), 'a') as f:
- f.write(f"{cls} {' '.join(f'{x:.6f}' for x in box[0])}\n") # write label.txt
- except Exception as e:
- print(f'WARNING: skipping one label for {file}: {e}')
-
-
- # Download manually from https://challenge.xviewdataset.org
- dir = Path(yaml['path']) # dataset root dir
- # urls = ['https://d307kc0mrhucc3.cloudfront.net/train_labels.zip', # train labels
- # 'https://d307kc0mrhucc3.cloudfront.net/train_images.zip', # 15G, 847 train images
- # 'https://d307kc0mrhucc3.cloudfront.net/val_images.zip'] # 5G, 282 val images (no labels)
- # download(urls, dir=dir, delete=False)
-
- # Convert labels
- convert_labels(dir / 'xView_train.geojson')
-
- # Move images
- images = Path(dir / 'images')
- images.mkdir(parents=True, exist_ok=True)
- Path(dir / 'train_images').rename(dir / 'images' / 'train')
- Path(dir / 'val_images').rename(dir / 'images' / 'val')
-
- # Split
- autosplit(dir / 'images' / 'train')
diff --git a/cv/detection/yolov5/pytorch/detect.py b/cv/detection/yolov5/pytorch/detect.py
deleted file mode 100644
index 44b33eb42289149b678a330e92c6a67c7cfea1fa..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/detect.py
+++ /dev/null
@@ -1,228 +0,0 @@
-"""Run inference with a YOLOv5 model on images, videos, directories, streams
-
-Usage:
- $ python path/to/detect.py --source path/to/img.jpg --weights yolov5s.pt --img 640
-"""
-
-import argparse
-import sys
-import time
-from pathlib import Path
-
-import cv2
-import torch
-import torch.backends.cudnn as cudnn
-
-FILE = Path(__file__).absolute()
-sys.path.append(FILE.parents[0].as_posix()) # add yolov5/ to path
-
-from models.experimental import attempt_load
-from utils.datasets import LoadStreams, LoadImages
-from utils.general import check_img_size, check_requirements, check_imshow, colorstr, non_max_suppression, \
- apply_classifier, scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path, save_one_box
-from utils.plots import colors, plot_one_box
-from utils.torch_utils import select_device, load_classifier, time_synchronized
-
-
-@torch.no_grad()
-def run(weights='yolov5s.pt', # model.pt path(s)
- source='data/images', # file/dir/URL/glob, 0 for webcam
- imgsz=640, # inference size (pixels)
- conf_thres=0.25, # confidence threshold
- iou_thres=0.45, # NMS IOU threshold
- max_det=1000, # maximum detections per image
- device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
- view_img=False, # show results
- save_txt=False, # save results to *.txt
- save_conf=False, # save confidences in --save-txt labels
- save_crop=False, # save cropped prediction boxes
- nosave=False, # do not save images/videos
- classes=None, # filter by class: --class 0, or --class 0 2 3
- agnostic_nms=False, # class-agnostic NMS
- augment=False, # augmented inference
- visualize=False, # visualize features
- update=False, # update all models
- project='runs/detect', # save results to project/name
- name='exp', # save results to project/name
- exist_ok=False, # existing project/name ok, do not increment
- line_thickness=3, # bounding box thickness (pixels)
- hide_labels=False, # hide labels
- hide_conf=False, # hide confidences
- half=False, # use FP16 half-precision inference
- ):
- save_img = not nosave and not source.endswith('.txt') # save inference images
- webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(
- ('rtsp://', 'rtmp://', 'http://', 'https://'))
-
- # Directories
- save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
- (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
-
- # Initialize
- set_logging()
- device = select_device(device)
- half &= device.type != 'cpu' # half precision only supported on CUDA
-
- # Load model
- model = attempt_load(weights, map_location=device) # load FP32 model
- stride = int(model.stride.max()) # model stride
- imgsz = check_img_size(imgsz, s=stride) # check image size
- names = model.module.names if hasattr(model, 'module') else model.names # get class names
- if half:
- model.half() # to FP16
-
- # Second-stage classifier
- classify = False
- if classify:
- modelc = load_classifier(name='resnet50', n=2) # initialize
- modelc.load_state_dict(torch.load('resnet50.pt', map_location=device)['model']).to(device).eval()
-
- # Dataloader
- if webcam:
- view_img = check_imshow()
- cudnn.benchmark = True # set True to speed up constant image size inference
- dataset = LoadStreams(source, img_size=imgsz, stride=stride)
- bs = len(dataset) # batch_size
- else:
- dataset = LoadImages(source, img_size=imgsz, stride=stride)
- bs = 1 # batch_size
- vid_path, vid_writer = [None] * bs, [None] * bs
-
- # Run inference
- if device.type != 'cpu':
- model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
- t0 = time.time()
- for path, img, im0s, vid_cap in dataset:
- img = torch.from_numpy(img).to(device)
- img = img.half() if half else img.float() # uint8 to fp16/32
- img /= 255.0 # 0 - 255 to 0.0 - 1.0
- if img.ndimension() == 3:
- img = img.unsqueeze(0)
-
- # Inference
- t1 = time_synchronized()
- pred = model(img,
- augment=augment,
- visualize=increment_path(save_dir / 'features', mkdir=True) if visualize else False)[0]
-
- # Apply NMS
- pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
- t2 = time_synchronized()
-
- # Apply Classifier
- if classify:
- pred = apply_classifier(pred, modelc, img, im0s)
-
- # Process detections
- for i, det in enumerate(pred): # detections per image
- if webcam: # batch_size >= 1
- p, s, im0, frame = path[i], f'{i}: ', im0s[i].copy(), dataset.count
- else:
- p, s, im0, frame = path, '', im0s.copy(), getattr(dataset, 'frame', 0)
-
- p = Path(p) # to Path
- save_path = str(save_dir / p.name) # img.jpg
- txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt
- s += '%gx%g ' % img.shape[2:] # print string
- gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
- imc = im0.copy() if save_crop else im0 # for save_crop
- if len(det):
- # Rescale boxes from img_size to im0 size
- det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
-
- # Print results
- for c in det[:, -1].unique():
- n = (det[:, -1] == c).sum() # detections per class
- s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
-
- # Write results
- for *xyxy, conf, cls in reversed(det):
- if save_txt: # Write to file
- xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
- with open(txt_path + '.txt', 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- if save_img or save_crop or view_img: # Add bbox to image
- c = int(cls) # integer class
- label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
- plot_one_box(xyxy, im0, label=label, color=colors(c, True), line_thickness=line_thickness)
- if save_crop:
- save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
-
- # Print time (inference + NMS)
- print(f'{s}Done. ({t2 - t1:.3f}s)')
-
- # Stream results
- if view_img:
- cv2.imshow(str(p), im0)
- cv2.waitKey(1) # 1 millisecond
-
- # Save results (image with detections)
- if save_img:
- if dataset.mode == 'image':
- cv2.imwrite(save_path, im0)
- else: # 'video' or 'stream'
- if vid_path[i] != save_path: # new video
- vid_path[i] = save_path
- if isinstance(vid_writer[i], cv2.VideoWriter):
- vid_writer[i].release() # release previous video writer
- if vid_cap: # video
- fps = vid_cap.get(cv2.CAP_PROP_FPS)
- w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- else: # stream
- fps, w, h = 30, im0.shape[1], im0.shape[0]
- save_path += '.mp4'
- vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
- vid_writer[i].write(im0)
-
- if save_txt or save_img:
- s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
- print(f"Results saved to {save_dir}{s}")
-
- if update:
- strip_optimizer(weights) # update model (to fix SourceChangeWarning)
-
- print(f'Done. ({time.time() - t0:.3f}s)')
-
-
-def parse_opt():
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)')
- parser.add_argument('--source', type=str, default='data/images', help='file/dir/URL/glob, 0 for webcam')
- parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')
- parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
- parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--view-img', action='store_true', help='show results')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
- parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
- parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
- parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
- parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
- parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
- parser.add_argument('--augment', action='store_true', help='augmented inference')
- parser.add_argument('--visualize', action='store_true', help='visualize features')
- parser.add_argument('--update', action='store_true', help='update all models')
- parser.add_argument('--project', default='runs/detect', help='save results to project/name')
- parser.add_argument('--name', default='exp', help='save results to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
- parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
- parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
- parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
- opt = parser.parse_args()
- return opt
-
-
-def main(opt):
- print(colorstr('detect: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
- check_requirements(exclude=('tensorboard', 'thop'))
- run(**vars(opt))
-
-
-if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
diff --git a/cv/detection/yolov5/pytorch/export.py b/cv/detection/yolov5/pytorch/export.py
deleted file mode 100644
index b7ff0748ba936e06942728f0863e48c95e444e12..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/export.py
+++ /dev/null
@@ -1,173 +0,0 @@
-"""Export a YOLOv5 *.pt model to TorchScript, ONNX, CoreML formats
-
-Usage:
- $ python path/to/export.py --weights yolov5s.pt --img 640 --batch 1
-"""
-
-import argparse
-import sys
-import time
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-from torch.utils.mobile_optimizer import optimize_for_mobile
-
-FILE = Path(__file__).absolute()
-sys.path.append(FILE.parents[0].as_posix()) # add yolov5/ to path
-
-from models.common import Conv
-from models.yolo import Detect
-from models.experimental import attempt_load
-from utils.activations import Hardswish, SiLU
-from utils.general import colorstr, check_img_size, check_requirements, file_size, set_logging
-from utils.torch_utils import select_device
-
-
-def run(weights='./yolov5s.pt', # weights path
- img_size=(640, 640), # image (height, width)
- batch_size=1, # batch size
- device='cpu', # cuda device, i.e. 0 or 0,1,2,3 or cpu
- include=('torchscript', 'onnx', 'coreml'), # include formats
- half=False, # FP16 half-precision export
- inplace=False, # set YOLOv5 Detect() inplace=True
- train=False, # model.train() mode
- optimize=False, # TorchScript: optimize for mobile
- dynamic=False, # ONNX: dynamic axes
- simplify=False, # ONNX: simplify model
- opset_version=12, # ONNX: opset version
- ):
- t = time.time()
- include = [x.lower() for x in include]
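- # a single size such as [640] becomes [640, 640] (height, width)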
- img_size *= 2 if len(img_size) == 1 else 1 # expand
-
- # Load PyTorch model
- device = select_device(device)
- assert not (device.type == 'cpu' and half), '--half only compatible with GPU export, i.e. use --device 0'
- model = attempt_load(weights, map_location=device) # load FP32 model
- labels = model.names
-
- # Input
- gs = int(max(model.stride)) # grid size (max stride)
- img_size = [check_img_size(x, gs) for x in img_size] # verify img_size are gs-multiples
- img = torch.zeros(batch_size, 3, *img_size).to(device) # image size(1,3,320,192) iDetection
-
- # Update model
- if half:
- img, model = img.half(), model.half() # to FP16
- model.train() if train else model.eval() # training mode = no Detect() layer grid construction
- for k, m in model.named_modules():
- m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
- if isinstance(m, Conv): # assign export-friendly activations
- if isinstance(m.act, nn.Hardswish):
- m.act = Hardswish()
- elif isinstance(m.act, nn.SiLU):
- m.act = SiLU()
- elif isinstance(m, Detect):
- m.inplace = inplace
- m.onnx_dynamic = dynamic
- # m.forward = m.forward_export # assign forward (optional)
-
- for _ in range(2):
- y = model(img) # dry runs
- print(f"\n{colorstr('PyTorch:')} starting from {weights} ({file_size(weights):.1f} MB)")
-
- # TorchScript export -----------------------------------------------------------------------------------------------
- if 'torchscript' in include or 'coreml' in include:
- prefix = colorstr('TorchScript:')
- try:
- print(f'\n{prefix} starting export with torch {torch.__version__}...')
- f = weights.replace('.pt', '.torchscript.pt') # filename
- ts = torch.jit.trace(model, img, strict=False)
- (optimize_for_mobile(ts) if optimize else ts).save(f)
- print(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
- except Exception as e:
- print(f'{prefix} export failure: {e}')
-
- # ONNX export ------------------------------------------------------------------------------------------------------
- if 'onnx' in include:
- prefix = colorstr('ONNX:')
- try:
- import onnx
-
- print(f'{prefix} starting export with onnx {onnx.__version__}...')
- f = weights.replace('.pt', '.onnx') # filename
- torch.onnx.export(model, img, f, verbose=False, opset_version=opset_version,
- training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
- do_constant_folding=not train,
- input_names=['images'],
- output_names=['output'],
- dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'}, # shape(1,3,640,640)
- 'output': {0: 'batch', 1: 'anchors'} # shape(1,25200,85)
- } if dynamic else None)
-
- # Checks
- model_onnx = onnx.load(f) # load onnx model
- onnx.checker.check_model(model_onnx) # check onnx model
- # print(onnx.helper.printable_graph(model_onnx.graph)) # print
-
- # Simplify
- if simplify:
- try:
- check_requirements(['onnx-simplifier'])
- import onnxsim
-
- print(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
- model_onnx, check = onnxsim.simplify(
- model_onnx,
- dynamic_input_shape=dynamic,
- input_shapes={'images': list(img.shape)} if dynamic else None)
- assert check, 'assert check failed'
- onnx.save(model_onnx, f)
- except Exception as e:
- print(f'{prefix} simplifier failure: {e}')
- print(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
- except Exception as e:
- print(f'{prefix} export failure: {e}')
-
- # CoreML export ----------------------------------------------------------------------------------------------------
- if 'coreml' in include:
- prefix = colorstr('CoreML:')
- try:
- import coremltools as ct
-
- print(f'{prefix} starting export with coremltools {ct.__version__}...')
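- # note: conversion reuses the TorchScript trace 'ts' created above; the trace block also runs when 'coreml' is included, but a failed trace leaves 'ts' undefined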
- assert train, 'CoreML exports should be placed in model.train() mode with `python export.py --train`'
- model = ct.convert(ts, inputs=[ct.ImageType('image', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])])
- f = weights.replace('.pt', '.mlmodel') # filename
- model.save(f)
- print(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
- except Exception as e:
- print(f'{prefix} export failure: {e}')
-
- # Finish
- print(f'\nExport complete ({time.time() - t:.2f}s). Visualize with https://github.com/lutzroeder/netron.')
-
-
-def parse_opt():
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='./yolov5s.pt', help='weights path')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image (height, width)')
- parser.add_argument('--batch-size', type=int, default=1, help='batch size')
- parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--include', nargs='+', default=['torchscript', 'onnx', 'coreml'], help='include formats')
- parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
- parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
- parser.add_argument('--train', action='store_true', help='model.train() mode')
- parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
- parser.add_argument('--dynamic', action='store_true', help='ONNX: dynamic axes')
- parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
- parser.add_argument('--opset-version', type=int, default=12, help='ONNX: opset version')
- opt = parser.parse_args()
- return opt
-
-
-def main(opt):
- set_logging()
- print(colorstr('export: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
- run(**vars(opt))
-
-
-if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
diff --git a/cv/detection/yolov5/pytorch/get_num_devices.sh b/cv/detection/yolov5/pytorch/get_num_devices.sh
deleted file mode 100644
index 9c37beae6243421aed9b13bb2de4c1069dcd1cf7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/get_num_devices.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-devices=$CUDA_VISIBLE_DEVICES
-if [ -n "$devices" ]; then
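- # split the comma-separated device list into a bash array and count its entries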
- _devices=(${devices//,/ })
- num_devices=${#_devices[@]}
-else
- num_devices=2
- export CUDA_VISIBLE_DEVICES=0,1
- echo "CUDA_VISIBLE_DEVICES not set; defaulting to nproc_per_node = ${num_devices}"
-fi
-export IX_NUM_CUDA_VISIBLE_DEVICES=${num_devices}
\ No newline at end of file
diff --git a/cv/detection/yolov5/pytorch/hubconf.py b/cv/detection/yolov5/pytorch/hubconf.py
deleted file mode 100644
index 55536c3a42f36e05aceffe9def52570911a6f62c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/hubconf.py
+++ /dev/null
@@ -1,127 +0,0 @@
-"""YOLOv5 PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/
-
-Usage:
- import torch
- model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
-"""
-
-import torch
-
-
-def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- """Creates a specified YOLOv5 model
-
- Arguments:
- name (str): name of model, i.e. 'yolov5s'
- pretrained (bool): load pretrained weights into the model
- channels (int): number of input channels
- classes (int): number of model classes
- autoshape (bool): apply YOLOv5 .autoshape() wrapper to model
- verbose (bool): print all information to screen
- device (str, torch.device, None): device to use for model parameters
-
- Returns:
- YOLOv5 pytorch model
- """
- from pathlib import Path
-
- from models.yolo import Model, attempt_load
- from utils.general import check_requirements, set_logging
- from utils.google_utils import attempt_download
- from utils.torch_utils import select_device
-
- file = Path(__file__).absolute()
- check_requirements(requirements=file.parent / 'requirements.txt', exclude=('tensorboard', 'thop', 'opencv-python'))
- set_logging(verbose=verbose)
-
- save_dir = Path('') if str(name).endswith('.pt') else file.parent
- path = (save_dir / name).with_suffix('.pt') # checkpoint path
- try:
- device = select_device(('0' if torch.cuda.is_available() else 'cpu') if device is None else device)
-
- if pretrained and channels == 3 and classes == 80:
- model = attempt_load(path, map_location=device) # download/load FP32 model
- else:
- cfg = list((Path(__file__).parent / 'models').rglob(f'{name}.yaml'))[0] # model.yaml path
- model = Model(cfg, channels, classes) # create model
- if pretrained:
- ckpt = torch.load(attempt_download(path), map_location=device) # load
- msd = model.state_dict() # model state_dict
- csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32
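- # keep only tensors whose shapes match the model (e.g. drops the Detect head when classes != 80) before the non-strict load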
- csd = {k: v for k, v in csd.items() if msd[k].shape == v.shape} # filter
- model.load_state_dict(csd, strict=False) # load
- if len(ckpt['model'].names) == classes:
- model.names = ckpt['model'].names # set class names attribute
- if autoshape:
- model = model.autoshape() # for file/URI/PIL/cv2/np inputs and NMS
- return model.to(device)
-
- except Exception as e:
- help_url = 'https://github.com/ultralytics/yolov5/issues/36'
- s = 'Cache may be out of date, try `force_reload=True`. See %s for help.' % help_url
- raise Exception(s) from e
-
-
-def custom(path='path/to/model.pt', autoshape=True, verbose=True, device=None):
- # YOLOv5 custom or local model
- return _create(path, autoshape=autoshape, verbose=verbose, device=device)
-
-
-def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-small model https://github.com/ultralytics/yolov5
- return _create('yolov5s', pretrained, channels, classes, autoshape, verbose, device)
-
-
-def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-medium model https://github.com/ultralytics/yolov5
- return _create('yolov5m', pretrained, channels, classes, autoshape, verbose, device)
-
-
-def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-large model https://github.com/ultralytics/yolov5
- return _create('yolov5l', pretrained, channels, classes, autoshape, verbose, device)
-
-
-def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-xlarge model https://github.com/ultralytics/yolov5
- return _create('yolov5x', pretrained, channels, classes, autoshape, verbose, device)
-
-
-def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-small-P6 model https://github.com/ultralytics/yolov5
- return _create('yolov5s6', pretrained, channels, classes, autoshape, verbose, device)
-
-
-def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-medium-P6 model https://github.com/ultralytics/yolov5
- return _create('yolov5m6', pretrained, channels, classes, autoshape, verbose, device)
-
-
-def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-large-P6 model https://github.com/ultralytics/yolov5
- return _create('yolov5l6', pretrained, channels, classes, autoshape, verbose, device)
-
-
-def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
- # YOLOv5-xlarge-P6 model https://github.com/ultralytics/yolov5
- return _create('yolov5x6', pretrained, channels, classes, autoshape, verbose, device)
-
-
-if __name__ == '__main__':
- model = _create(name='yolov5s', pretrained=True, channels=3, classes=80, autoshape=True, verbose=True) # pretrained
- # model = custom(path='path/to/model.pt') # custom
-
- # Verify inference
- import cv2
- import numpy as np
- from PIL import Image
-
- imgs = ['data/images/zidane.jpg', # filename
- 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg', # URI
- cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV
- Image.open('data/images/bus.jpg'), # PIL
- np.zeros((320, 640, 3))] # numpy
-
- results = model(imgs) # batched inference
- results.print()
- results.save()
diff --git a/cv/detection/yolov5/pytorch/init.sh b/cv/detection/yolov5/pytorch/init.sh
deleted file mode 100644
index 80e367cb53d9f6fcfcea645d194ae391c274e175..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/init.sh
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-if [[ "$(uname)" == "Linux" ]]; then
- if command -v apt &> /dev/null; then
- apt install -y numactl libgl1-mesa-dev
- elif command -v yum &> /dev/null; then
- yum install -y numactl mesa-libGL
- else
- echo "Unsupported package manager"
- exit 1
- fi
-else
- echo "Unsupported operating system"
- exit 1
-fi
-
-pip3 install -r requirements.txt
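- # extract the minor version number, e.g. "10" for Python 3.10.x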
-PY_VERSION=$(python3 -V 2>&1|awk '{print $2}'|awk -F '.' '{print $2}')
-if [ "$PY_VERSION" == "10" ]; then
- pip3 install matplotlib==3.8.2
- pip3 install numpy==1.22.4
- pip3 install Pillow==9.5
-else
- echo "Skipping version-pinned packages: they are only applied on Python 3.10"
-fi
-
-wandb disabled
-pip3 install pycocotools
diff --git a/cv/detection/yolov5/pytorch/models/__init__.py b/cv/detection/yolov5/pytorch/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/cv/detection/yolov5/pytorch/models/common.py b/cv/detection/yolov5/pytorch/models/common.py
deleted file mode 100644
index 7603e9e4719202e9143bcfd80984bf92128085d9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/common.py
+++ /dev/null
@@ -1,390 +0,0 @@
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-
-# YOLOv5 common modules
-
-from copy import copy
-from pathlib import Path
-
-import math
-import numpy as np
-import pandas as pd
-import requests
-import torch
-import torch.nn as nn
-from PIL import Image
-from torch.cuda import amp
-
-from utils.datasets import exif_transpose, letterbox
-from utils.general import non_max_suppression, make_divisible, scale_coords, increment_path, xyxy2xywh, save_one_box
-from utils.plots import colors, plot_one_box
-from utils.torch_utils import time_synchronized
-from utils.activations import SiLU
-
-def autopad(k, p=None): # kernel, padding
- # Pad to 'same'
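- # e.g. k=3 -> p=1, k=(1, 3) -> p=[0, 1]; preserves spatial size for stride-1, odd-kernel convs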
- if p is None:
- p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
- return p
-
-
-def DWConv(c1, c2, k=1, s=1, act=True):
- # Depthwise convolution
- return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
-
-
-class Conv(nn.Module):
- # Standard convolution
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super(Conv, self).__init__()
- self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
- self.bn = nn.BatchNorm2d(c2)
- self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
- #self.act = SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
-
- def forward(self, x):
- return self.act(self.bn(self.conv(x)))
-
- def fuseforward(self, x):
- return self.act(self.conv(x))
-
-
-class TransformerLayer(nn.Module):
- # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
- def __init__(self, c, num_heads):
- super().__init__()
- self.q = nn.Linear(c, c, bias=False)
- self.k = nn.Linear(c, c, bias=False)
- self.v = nn.Linear(c, c, bias=False)
- self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
- self.fc1 = nn.Linear(c, c, bias=False)
- self.fc2 = nn.Linear(c, c, bias=False)
-
- def forward(self, x):
- x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
- x = self.fc2(self.fc1(x)) + x
- return x
-
-
-class TransformerBlock(nn.Module):
- # Vision Transformer https://arxiv.org/abs/2010.11929
- def __init__(self, c1, c2, num_heads, num_layers):
- super().__init__()
- self.conv = None
- if c1 != c2:
- self.conv = Conv(c1, c2)
- self.linear = nn.Linear(c2, c2) # learnable position embedding
- self.tr = nn.Sequential(*[TransformerLayer(c2, num_heads) for _ in range(num_layers)])
- self.c2 = c2
-
- def forward(self, x):
- if self.conv is not None:
- x = self.conv(x)
- b, _, h, w = x.shape # BCHW
- p = x.flatten(2).unsqueeze(0).transpose(0, 3).squeeze(3)
- return self.tr(p + self.linear(p)).unsqueeze(3).transpose(0, 3).reshape(b, self.c2, h, w)
-
-
-class Bottleneck(nn.Module):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super(Bottleneck, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c2, 3, 1, g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class BottleneckCSP(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(BottleneckCSP, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
- self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
- self.act = nn.LeakyReLU(0.1, inplace=True)
- self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))
-
-
-class C3(nn.Module):
- # CSP Bottleneck with 3 convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(C3, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1) # act=FReLU(c2)
- self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
- # self.m = nn.Sequential(*[CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)])
-
- def forward(self, x):
- return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
-
-
-class C3TR(C3):
- # C3 module with TransformerBlock()
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = TransformerBlock(c_, c_, 4, n)
-
-
-class SPP(nn.Module):
- # Spatial pyramid pooling layer used in YOLOv3-SPP
- def __init__(self, c1, c2, k=(5, 9, 13)):
- super(SPP, self).__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
- self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
-
- def forward(self, x):
- x = self.cv1(x)
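- # parallel max-pools (stride 1, padding k//2) keep the spatial size, so the multi-scale outputs can be concatenated on channels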
- return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
-
-
-class Focus(nn.Module):
- # Focus wh information into c-space
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super(Focus, self).__init__()
- self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
- # self.contract = Contract(gain=2)
-
- def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
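- # space-to-depth: take the four stride-2 pixel slices (even/odd rows x even/odd cols), stack them on the channel dim, then convolve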
- return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
- # return self.conv(self.contract(x))
-
-
-class Contract(nn.Module):
- # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
- N, C, H, W = x.size() # assert (H / s == 0) and (W / s == 0), 'Indivisible gain'
- s = self.gain
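- # pixel-unshuffle: each s x s spatial block becomes s*s extra channels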
- x = x.view(N, C, H // s, s, W // s, s) # x(1,64,40,2,40,2)
- x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
- return x.view(N, C * s * s, H // s, W // s) # x(1,256,40,40)
-
-
-class Expand(nn.Module):
- # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
- N, C, H, W = x.size() # assert C / s ** 2 == 0, 'Indivisible gain'
- s = self.gain
- x = x.view(N, s, s, C // s ** 2, H, W) # x(1,2,2,16,80,80)
- x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
- return x.view(N, C // s ** 2, H * s, W * s) # x(1,16,160,160)
-
-
-class Concat(nn.Module):
- # Concatenate a list of tensors along dimension
- def __init__(self, dimension=1):
- super(Concat, self).__init__()
- self.d = dimension
-
- def forward(self, x):
- return torch.cat(x, self.d)
-
-
-class NMS(nn.Module):
- # Non-Maximum Suppression (NMS) module
- conf = 0.25 # confidence threshold
- iou = 0.45 # IoU threshold
- classes = None # (optional list) filter by class
- max_det = 1000 # maximum number of detections per image
-
- def __init__(self):
- super(NMS, self).__init__()
-
- def forward(self, x):
- return non_max_suppression(x[0], self.conf, iou_thres=self.iou, classes=self.classes, max_det=self.max_det)
-
-
-class AutoShape(nn.Module):
- # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
- conf = 0.25 # NMS confidence threshold
- iou = 0.45 # NMS IoU threshold
- classes = None # (optional list) filter by class
- max_det = 1000 # maximum number of detections per image
-
- def __init__(self, model):
- super(AutoShape, self).__init__()
- self.model = model.eval()
-
- def autoshape(self):
- print('AutoShape already enabled, skipping... ') # model already converted to model.autoshape()
- return self
-
- @torch.no_grad()
- def forward(self, imgs, size=640, augment=False, profile=False):
- # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
- # filename: imgs = 'data/images/zidane.jpg'
- # URI: = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg'
- # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
- # PIL: = Image.open('image.jpg') # HWC x(640,1280,3)
- # numpy: = np.zeros((640,1280,3)) # HWC
- # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
- # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
-
- t = [time_synchronized()]
- p = next(self.model.parameters()) # for device and type
- if isinstance(imgs, torch.Tensor): # torch
- with amp.autocast(enabled=p.device.type != 'cpu'):
- return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference
-
- # Pre-process
- n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images
- shape0, shape1, files = [], [], [] # image and inference shapes, filenames
- for i, im in enumerate(imgs):
- f = f'image{i}' # filename
- if isinstance(im, str): # filename or uri
- im, f = Image.open(requests.get(im, stream=True).raw if im.startswith('http') else im), im
- im = np.asarray(exif_transpose(im))
- elif isinstance(im, Image.Image): # PIL Image
- im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
- files.append(Path(f).with_suffix('.jpg').name)
- if im.shape[0] < 5: # image in CHW
- im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
- im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3) # enforce 3ch input
- s = im.shape[:2] # HWC
- shape0.append(s) # image shape
- g = (size / max(s)) # gain
- shape1.append([y * g for y in s])
- imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update
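- # round each dimension of the shared inference shape up to a multiple of the model's max stride so the detection grids align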
- shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape
- x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad
- x = np.stack(x, 0) if n > 1 else x[0][None] # stack
- x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW
- x = torch.from_numpy(x).to(p.device).type_as(p) / 255. # uint8 to fp16/32
- t.append(time_synchronized())
-
- with amp.autocast(enabled=p.device.type != 'cpu'):
- # Inference
- y = self.model(x, augment, profile)[0] # forward
- t.append(time_synchronized())
-
- # Post-process
- y = non_max_suppression(y, self.conf, iou_thres=self.iou, classes=self.classes, max_det=self.max_det) # NMS
- for i in range(n):
- scale_coords(shape1, y[i][:, :4], shape0[i])
-
- t.append(time_synchronized())
- return Detections(imgs, y, files, t, self.names, x.shape)
-
-
-class Detections:
- # detections class for YOLOv5 inference results
- def __init__(self, imgs, pred, files, times=None, names=None, shape=None):
- super(Detections, self).__init__()
- d = pred[0].device # device
- gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs] # normalizations
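- # gn[i] = [w, h, w, h, 1., 1.] per image: dividing an (xyxy, conf, cls) row by it normalizes the coordinates to 0-1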
- self.imgs = imgs # list of images as numpy arrays
- self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
- self.names = names # class names
- self.files = files # image filenames
- self.xyxy = pred # xyxy pixels
- self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
- self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
- self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
- self.n = len(self.pred) # number of images (batch size)
- self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms)
- self.s = shape # inference BCHW shape
-
- def display(self, pprint=False, show=False, save=False, crop=False, render=False, save_dir=Path('')):
- for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
- s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # description string (avoid shadowing built-in str)
- if pred is not None:
- for c in pred[:, -1].unique():
- n = (pred[:, -1] == c).sum() # detections per class
- s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
- if show or save or render or crop:
- for *box, conf, cls in reversed(pred): # xyxy, confidence, class
- label = f'{self.names[int(cls)]} {conf:.2f}'
- if crop:
- save_one_box(box, im, file=save_dir / 'crops' / self.names[int(cls)] / self.files[i])
- else: # all others
- plot_one_box(box, im, label=label, color=colors(cls))
-
- im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np
- if pprint:
- print(s.rstrip(', '))
- if show:
- im.show(self.files[i]) # show
- if save:
- f = self.files[i]
- im.save(save_dir / f) # save
- print(f"{'Saved' * (i == 0)} {f}", end=',' if i < self.n - 1 else f' to {save_dir}\n')
- if render:
- self.imgs[i] = np.asarray(im)
-
- def print(self):
- self.display(pprint=True) # print results
- print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t)
-
- def show(self):
- self.display(show=True) # show results
-
- def save(self, save_dir='runs/hub/exp'):
- save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/hub/exp', mkdir=True) # increment save_dir
- self.display(save=True, save_dir=save_dir) # save results
-
- def crop(self, save_dir='runs/hub/exp'):
- save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/hub/exp', mkdir=True) # increment save_dir
- self.display(crop=True, save_dir=save_dir) # crop results
- print(f'Saved results to {save_dir}\n')
-
- def render(self):
- self.display(render=True) # render results
- return self.imgs
-
- def pandas(self):
- # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
- new = copy(self) # return copy
- ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
- cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
- for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
- a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
- setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
- return new
-
- def tolist(self):
- # return a list of Detections objects, i.e. 'for result in results.tolist():'
- x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], (0., 0., 0., 0.), self.names, self.s)
- for i in range(self.n)] # files, names and shape now land in the right parameters; dummy times, since per-image timing is unavailable
- for d in x:
- for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
- setattr(d, k, getattr(d, k)[0]) # pop out of list
- return x
-
- def __len__(self):
- return self.n
-
-
-class Classify(nn.Module):
- # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
- super(Classify, self).__init__()
- self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1)
- self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1)
- self.flat = nn.Flatten()
-
- def forward(self, x):
- z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list
- return self.flat(self.conv(z)) # flatten to x(b,c2)
diff --git a/cv/detection/yolov5/pytorch/models/experimental.py b/cv/detection/yolov5/pytorch/models/experimental.py
deleted file mode 100644
index d316b18373c3e536f4c0980cff9079942044e5a2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/experimental.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# YOLOv5 experimental modules
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-from models.common import Conv, DWConv
-from utils.google_utils import attempt_download
-
-
-class CrossConv(nn.Module):
- # Cross Convolution Downsample
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
- # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
- super(CrossConv, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, (1, k), (1, s))
- self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class Sum(nn.Module):
- # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
- def __init__(self, n, weight=False): # n: number of inputs
- super(Sum, self).__init__()
- self.weight = weight # apply weights boolean
- self.iter = range(n - 1) # iter object
- if weight:
- self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
-
- def forward(self, x):
- y = x[0] # no weight
- if self.weight:
- w = torch.sigmoid(self.w) * 2
- for i in self.iter:
- y = y + x[i + 1] * w[i]
- else:
- for i in self.iter:
- y = y + x[i + 1]
- return y
-
-
-class GhostConv(nn.Module):
- # Ghost Convolution https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
- super(GhostConv, self).__init__()
- c_ = c2 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, k, s, None, g, act)
- self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
-
- def forward(self, x):
- y = self.cv1(x)
- return torch.cat([y, self.cv2(y)], 1)
-
-
-class GhostBottleneck(nn.Module):
- # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
- super(GhostBottleneck, self).__init__()
- c_ = c2 // 2
- self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
- DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
- GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
- self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
- Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
-
- def forward(self, x):
- return self.conv(x) + self.shortcut(x)
-
-
-class MixConv2d(nn.Module):
- # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
- def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
- super(MixConv2d, self).__init__()
- groups = len(k)
- if equal_ch: # equal c_ per group
- i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
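- # floor of an evenly spaced ramp over [0, groups) assigns each of the c2 output channels to one kernel-size group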
- c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
- else: # equal weight.numel() per group
- b = [c2] + [0] * groups
- a = np.eye(groups + 1, groups, k=-1)
- a -= np.roll(a, 1, axis=1)
- a *= np.array(k) ** 2
- a[0] = 1
- c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
-
- self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
- self.bn = nn.BatchNorm2d(c2)
- self.act = nn.LeakyReLU(0.1, inplace=True)
-
- def forward(self, x):
- return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
-
-
-class Ensemble(nn.ModuleList):
- # Ensemble of models
- def __init__(self):
- super(Ensemble, self).__init__()
-
- def forward(self, x, augment=False):
- y = []
- for module in self:
- y.append(module(x, augment)[0])
- # y = torch.stack(y).max(0)[0] # max ensemble
- # y = torch.stack(y).mean(0) # mean ensemble
- y = torch.cat(y, 1) # nms ensemble
- return y, None # inference, train output
-
-
-def attempt_load(weights, map_location=None, inplace=True):
- from models.yolo import Detect, Model
-
- # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
- model = Ensemble()
- for w in weights if isinstance(weights, list) else [weights]:
- ckpt = torch.load(attempt_download(w), map_location=map_location) # load
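- # prefer EMA weights when present; fuse Conv+BN and switch to eval mode for inference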
- model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model
-
- # Compatibility updates
- for m in model.modules():
- if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]:
- m.inplace = inplace # pytorch 1.7.0 compatibility
- elif type(m) is Conv:
- m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
-
- if len(model) == 1:
- return model[-1] # return model
- else:
- print(f'Ensemble created with {weights}\n')
- for k in ['names']:
- setattr(model, k, getattr(model[-1], k))
- model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride
- return model # return ensemble
diff --git a/cv/detection/yolov5/pytorch/models/hub/anchors.yaml b/cv/detection/yolov5/pytorch/models/hub/anchors.yaml
deleted file mode 100644
index a07a4dc72387ef79ee0d473ac1055d23c6543ee9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/anchors.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-# Default YOLOv5 anchors for COCO data
-
-
-# P5 -------------------------------------------------------------------------------------------------------------------
-# P5-640:
-anchors_p5_640:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32
-
-
-# P6 -------------------------------------------------------------------------------------------------------------------
-# P6-640: thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11, 21,19, 17,41, 43,32, 39,70, 86,64, 65,131, 134,130, 120,265, 282,180, 247,354, 512,387
-anchors_p6_640:
- - [ 9,11, 21,19, 17,41 ] # P3/8
- - [ 43,32, 39,70, 86,64 ] # P4/16
- - [ 65,131, 134,130, 120,265 ] # P5/32
- - [ 282,180, 247,354, 512,387 ] # P6/64
-
-# P6-1280: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27, 44,40, 38,94, 96,68, 86,152, 180,137, 140,301, 303,264, 238,542, 436,615, 739,380, 925,792
-anchors_p6_1280:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# P6-1920: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41, 67,59, 57,141, 144,103, 129,227, 270,205, 209,452, 455,396, 358,812, 653,922, 1109,570, 1387,1187
-anchors_p6_1920:
- - [ 28,41, 67,59, 57,141 ] # P3/8
- - [ 144,103, 129,227, 270,205 ] # P4/16
- - [ 209,452, 455,396, 358,812 ] # P5/32
- - [ 653,922, 1109,570, 1387,1187 ] # P6/64
-
-
-# P7 -------------------------------------------------------------------------------------------------------------------
-# P7-640: thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11, 13,30, 29,20, 30,46, 61,38, 39,92, 78,80, 146,66, 79,163, 149,150, 321,143, 157,303, 257,402, 359,290, 524,372
-anchors_p7_640:
- - [ 11,11, 13,30, 29,20 ] # P3/8
- - [ 30,46, 61,38, 39,92 ] # P4/16
- - [ 78,80, 146,66, 79,163 ] # P5/32
- - [ 149,150, 321,143, 157,303 ] # P6/64
- - [ 257,402, 359,290, 524,372 ] # P7/128
-
-# P7-1280: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22, 54,36, 32,77, 70,83, 138,71, 75,173, 165,159, 148,334, 375,151, 334,317, 251,626, 499,474, 750,326, 534,814, 1079,818
-anchors_p7_1280:
- - [ 19,22, 54,36, 32,77 ] # P3/8
- - [ 70,83, 138,71, 75,173 ] # P4/16
- - [ 165,159, 148,334, 375,151 ] # P5/32
- - [ 334,317, 251,626, 499,474 ] # P6/64
- - [ 750,326, 534,814, 1079,818 ] # P7/128
-
-# P7-1920: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34, 81,55, 47,115, 105,124, 207,107, 113,259, 247,238, 222,500, 563,227, 501,476, 376,939, 749,711, 1126,489, 801,1222, 1618,1227
-anchors_p7_1920:
- - [ 29,34, 81,55, 47,115 ] # P3/8
- - [ 105,124, 207,107, 113,259 ] # P4/16
- - [ 247,238, 222,500, 563,227 ] # P5/32
- - [ 501,476, 376,939, 749,711 ] # P6/64
- - [ 1126,489, 801,1222, 1618,1227 ] # P7/128
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov3-spp.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov3-spp.yaml
deleted file mode 100644
index 0ca7b7f6577b94afbc7796379031c8716b5261e8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov3-spp.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32
-
-# darknet53 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Conv, [ 32, 3, 1 ] ], # 0
- [ -1, 1, Conv, [ 64, 3, 2 ] ], # 1-P1/2
- [ -1, 1, Bottleneck, [ 64 ] ],
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 3-P2/4
- [ -1, 2, Bottleneck, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 5-P3/8
- [ -1, 8, Bottleneck, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 7-P4/16
- [ -1, 8, Bottleneck, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P5/32
- [ -1, 4, Bottleneck, [ 1024 ] ], # 10
- ]
-
-# YOLOv3-SPP head
-head:
- [ [ -1, 1, Bottleneck, [ 1024, False ] ],
- [ -1, 1, SPP, [ 512, [ 5, 9, 13 ] ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ],
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ], # 15 (P5/32-large)
-
- [ -2, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, Conv, [ 512, 3, 1 ] ], # 22 (P4/16-medium)
-
- [ -2, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 1, Bottleneck, [ 256, False ] ],
- [ -1, 2, Bottleneck, [ 256, False ] ], # 27 (P3/8-small)
-
- [ [ 27, 22, 15 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov3-tiny.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov3-tiny.yaml
deleted file mode 100644
index d39a6b1f581c9372698ab39991d1d7f33f1f7181..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov3-tiny.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors:
- - [ 10,14, 23,27, 37,58 ] # P4/16
- - [ 81,82, 135,169, 344,319 ] # P5/32
-
-# YOLOv3-tiny backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Conv, [ 16, 3, 1 ] ], # 0
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 1-P1/2
- [ -1, 1, Conv, [ 32, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 3-P2/4
- [ -1, 1, Conv, [ 64, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 5-P3/8
- [ -1, 1, Conv, [ 128, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 7-P4/16
- [ -1, 1, Conv, [ 256, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 9-P5/32
- [ -1, 1, Conv, [ 512, 3, 1 ] ],
- [ -1, 1, nn.ZeroPad2d, [ [ 0, 1, 0, 1 ] ] ], # 11
- [ -1, 1, nn.MaxPool2d, [ 2, 1, 0 ] ], # 12
- ]
-
-# YOLOv3-tiny head
-head:
- [ [ -1, 1, Conv, [ 1024, 3, 1 ] ],
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, Conv, [ 512, 3, 1 ] ], # 15 (P5/32-large)
-
- [ -2, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Conv, [ 256, 3, 1 ] ], # 19 (P4/16-medium)
-
- [ [ 19, 15 ], 1, Detect, [ nc, anchors ] ], # Detect(P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov3.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov3.yaml
deleted file mode 100644
index 09df0d9ef36245a389eaed6aefa289fb98adba94..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov3.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32
-
-# darknet53 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Conv, [ 32, 3, 1 ] ], # 0
- [ -1, 1, Conv, [ 64, 3, 2 ] ], # 1-P1/2
- [ -1, 1, Bottleneck, [ 64 ] ],
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 3-P2/4
- [ -1, 2, Bottleneck, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 5-P3/8
- [ -1, 8, Bottleneck, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 7-P4/16
- [ -1, 8, Bottleneck, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P5/32
- [ -1, 4, Bottleneck, [ 1024 ] ], # 10
- ]
-
-# YOLOv3 head
-head:
- [ [ -1, 1, Bottleneck, [ 1024, False ] ],
- [ -1, 1, Conv, [ 512, [ 1, 1 ] ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ],
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ], # 15 (P5/32-large)
-
- [ -2, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, Conv, [ 512, 3, 1 ] ], # 22 (P4/16-medium)
-
- [ -2, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 1, Bottleneck, [ 256, False ] ],
- [ -1, 2, Bottleneck, [ 256, False ] ], # 27 (P3/8-small)
-
- [ [ 27, 22, 15 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5-fpn.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5-fpn.yaml
deleted file mode 100644
index b8b7fc1a23d45917f0e1837d1c103ddc535fd6e5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5-fpn.yaml
+++ /dev/null
@@ -1,40 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, Bottleneck, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, BottleneckCSP, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, BottleneckCSP, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 6, BottleneckCSP, [ 1024 ] ], # 9
- ]
-
-# YOLOv5 FPN head
-head:
- [ [ -1, 3, BottleneckCSP, [ 1024, False ] ], # 10 (P5/32-large)
-
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 3, BottleneckCSP, [ 512, False ] ], # 14 (P4/16-medium)
-
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 3, BottleneckCSP, [ 256, False ] ], # 18 (P3/8-small)
-
- [ [ 18, 14, 10 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5-p2.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5-p2.yaml
deleted file mode 100644
index 62122363df2d3bb5eba93fbe1cca6cb23649f609..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5-p2.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors: 3
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 9
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 13
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 17 (P3/8-small)
-
- [ -1, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 2 ], 1, Concat, [ 1 ] ], # cat backbone P2
- [ -1, 1, C3, [ 128, False ] ], # 21 (P2/4-xsmall)
-
- [ -1, 1, Conv, [ 128, 3, 2 ] ],
- [ [ -1, 18 ], 1, Concat, [ 1 ] ], # cat head P3
- [ -1, 3, C3, [ 256, False ] ], # 24 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 27 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 1024, False ] ], # 30 (P5/32-large)
-
- [ [ 24, 27, 30 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5-p6.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5-p6.yaml
deleted file mode 100644
index c5ef5177f0c8eb289b63a0af9b3b3dfebb1d0ad3..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5-p6.yaml
+++ /dev/null
@@ -1,54 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors: 3
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5-p7.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5-p7.yaml
deleted file mode 100644
index 505c590ca168ba79650283e463a55cc6d5edcff5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5-p7.yaml
+++ /dev/null
@@ -1,65 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors: 3
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 3, C3, [ 1024 ] ],
- [ -1, 1, Conv, [ 1280, 3, 2 ] ], # 11-P7/128
- [ -1, 1, SPP, [ 1280, [ 3, 5 ] ] ],
- [ -1, 3, C3, [ 1280, False ] ], # 13
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 1024, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat backbone P6
- [ -1, 3, C3, [ 1024, False ] ], # 17
-
- [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 21
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 25
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 29 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 26 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 32 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 22 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 35 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 18 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 38 (P6/64-xlarge)
-
- [ -1, 1, Conv, [ 1024, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P7
- [ -1, 3, C3, [ 1280, False ] ], # 41 (P7/128-xxlarge)
-
- [ [ 29, 32, 35, 38, 41 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6, P7)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5-panet.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5-panet.yaml
deleted file mode 100644
index aee5dab01fa176807c5f54373c0726a70b58bc0b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5-panet.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, BottleneckCSP, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, BottleneckCSP, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, BottleneckCSP, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 3, BottleneckCSP, [ 1024, False ] ], # 9
- ]
-
-# YOLOv5 PANet head
-head:
- [ [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, BottleneckCSP, [ 512, False ] ], # 13
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, BottleneckCSP, [ 256, False ] ], # 17 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, BottleneckCSP, [ 512, False ] ], # 20 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, BottleneckCSP, [ 1024, False ] ], # 23 (P5/32-large)
-
- [ [ 17, 20, 23 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5l6.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5l6.yaml
deleted file mode 100644
index 91c57da1939ed439b7584559a5a968be03319377..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5l6.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5m6.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5m6.yaml
deleted file mode 100644
index 4bef2e074a96ddca593ec805b3c9e7fe7dd4f9c7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5m6.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 0.67 # model depth multiple
-width_multiple: 0.75 # layer channel multiple
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5s-transformer.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5s-transformer.yaml
deleted file mode 100644
index 8023ba480d24d29bb44a576c3ab78cb58607e3ae..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5s-transformer.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 0.33 # model depth multiple
-width_multiple: 0.50 # layer channel multiple
-anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 3, C3TR, [ 1024, False ] ], # 9 <-------- C3TR() Transformer module
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 13
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 17 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 20 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 1024, False ] ], # 23 (P5/32-large)
-
- [ [ 17, 20, 23 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
- ]
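
The only change from the plain `yolov5s.yaml` here is the final backbone block: `C3TR` replaces `C3`, swapping the convolutional bottlenecks for transformer layers that attend over all spatial positions of the stride-32 feature map. A rough, self-contained sketch of that core idea follows; `TinySelfAttention2d` is illustrative only, not the repo's actual `C3TR` implementation.

```python
import torch
import torch.nn as nn

class TinySelfAttention2d(nn.Module):
    """Global self-attention over flattened H*W positions (illustration only)."""
    def __init__(self, c, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)

    def forward(self, x):                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).permute(2, 0, 1)  # (H*W, B, C) token sequence
        out, _ = self.attn(seq, seq, seq)    # every position attends to every other
        return out.permute(1, 2, 0).view(b, c, h, w)

y = TinySelfAttention2d(64)(torch.randn(2, 64, 20, 20))  # e.g. a stride-32 map
```
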
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5s6.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5s6.yaml
deleted file mode 100644
index ba1025ec87ad38a2f66682a86f593bd5feeb7583..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5s6.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 0.33 # model depth multiple
-width_multiple: 0.50 # layer channel multiple
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/hub/yolov5x6.yaml b/cv/detection/yolov5/pytorch/models/hub/yolov5x6.yaml
deleted file mode 100644
index 4fc9c9a119b80caa3215b27156c58db9f3946f5c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/hub/yolov5x6.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.33 # model depth multiple
-width_multiple: 1.25 # layer channel multiple
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
- ]
-
-# YOLOv5 head
-head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
- ]
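
Relative to the P5 models, the *6 variants above add a stride-64 P6 level: a fourth anchor row, an extra backbone stage, and `Detect(P3, P4, P5, P6)` over four feature maps, which is why these configs are normally paired with larger input resolutions. A quick sanity check on the resulting grid sizes (input sizes chosen for illustration):

```python
# Grid size per detection level is input_size // stride.
strides = (8, 16, 32, 64)  # P3, P4, P5, P6
for img_size in (640, 1280):
    print(img_size, [img_size // s for s in strides])
# 640  -> [80, 40, 20, 10]
# 1280 -> [160, 80, 40, 20]
```
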
diff --git a/cv/detection/yolov5/pytorch/models/yolo.py b/cv/detection/yolov5/pytorch/models/yolo.py
deleted file mode 100644
index b11443377080f0232dc0295e85e0fa3431b077f5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/yolo.py
+++ /dev/null
@@ -1,313 +0,0 @@
-"""YOLOv5-specific modules
-
-Usage:
- $ python path/to/models/yolo.py --cfg yolov5s.yaml
-"""
-
-import argparse
-import logging
-import sys
-from copy import deepcopy
-from pathlib import Path
-
-FILE = Path(__file__).absolute()
-sys.path.append(FILE.parents[1].as_posix()) # add yolov5/ to path
-
-from models.common import *
-from models.experimental import *
-from utils.autoanchor import check_anchor_order
-from utils.general import make_divisible, check_file, set_logging
-from utils.plots import feature_visualization
-from utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \
- select_device, copy_attr
-
-try:
- import thop # for FLOPs computation
-except ImportError:
- thop = None
-
-logger = logging.getLogger(__name__)
-
-
-class Detect(nn.Module):
- stride = None # strides computed during build
- onnx_dynamic = False # ONNX export parameter
-
- def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
- super(Detect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
- self.inplace = inplace # use in-place ops (e.g. slice assignment)
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if self.inplace:
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2) # wh
- y = torch.cat((xy, wh, y[..., 4:]), -1)
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
-
-class Model(nn.Module):
- def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
- super(Model, self).__init__()
- if isinstance(cfg, dict):
- self.yaml = cfg # model dict
- else: # is *.yaml
- import yaml # for torch hub
- self.yaml_file = Path(cfg).name
- with open(cfg) as f:
- self.yaml = yaml.safe_load(f) # model dict
-
- # Define model
- ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
- if nc and nc != self.yaml['nc']:
- logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
- self.yaml['nc'] = nc # override yaml value
- if anchors:
- logger.info(f'Overriding model.yaml anchors with anchors={anchors}')
- self.yaml['anchors'] = round(anchors) # override yaml value
- self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
- self.names = [str(i) for i in range(self.yaml['nc'])] # default names
- self.inplace = self.yaml.get('inplace', True)
- # logger.info([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])
-
- # Build strides, anchors
- m = self.model[-1] # Detect()
- if isinstance(m, Detect):
- s = 256 # 2x min stride
- m.inplace = self.inplace
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- m.anchors /= m.stride.view(-1, 1, 1)
- check_anchor_order(m)
- self.stride = m.stride
- self._initialize_biases() # only run once
- # logger.info('Strides: %s' % m.stride.tolist())
-
- # Init weights, biases
- initialize_weights(self)
- self.info()
- logger.info('')
-
- def forward(self, x, augment=False, profile=False, visualize=False):
- if augment:
- return self.forward_augment(x) # augmented inference, None
- return self.forward_once(x, profile, visualize) # single-scale inference, train
-
- def forward_augment(self, x):
- img_size = x.shape[-2:] # height, width
- s = [1, 0.83, 0.67] # scales
- f = [None, 3, None] # flips (2-ud, 3-lr)
- y = [] # outputs
- for si, fi in zip(s, f):
- xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
- yi = self.forward_once(xi)[0] # forward
- # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
- yi = self._descale_pred(yi, fi, si, img_size)
- y.append(yi)
- return torch.cat(y, 1), None # augmented inference, train
-
- def forward_once(self, x, profile=False, visualize=False):
- y, dt = [], [] # outputs
- for m in self.model:
- if m.f != -1: # if not from previous layer
- x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
-
- if profile:
- o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
- t = time_synchronized()
- for _ in range(10):
- _ = m(x)
- dt.append((time_synchronized() - t) * 100)
- if m == self.model[0]:
- logger.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}")
- logger.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
-
- x = m(x) # run
- y.append(x if m.i in self.save else None) # save output
-
- if visualize:
- feature_visualization(x, m.type, m.i, save_dir=visualize)
-
- if profile:
- logger.info('%.1fms total' % sum(dt))
- return x
-
- def _descale_pred(self, p, flips, scale, img_size):
- # de-scale predictions following augmented inference (inverse operation)
- if self.inplace:
- p[..., :4] /= scale # de-scale
- if flips == 2:
- p[..., 1] = img_size[0] - p[..., 1] # de-flip ud
- elif flips == 3:
- p[..., 0] = img_size[1] - p[..., 0] # de-flip lr
- else:
- x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale
- if flips == 2:
- y = img_size[0] - y # de-flip ud
- elif flips == 3:
- x = img_size[1] - x # de-flip lr
- p = torch.cat((x, y, wh, p[..., 4:]), -1)
- return p
-
- def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _print_biases(self):
- m = self.model[-1] # Detect() module
- for mi in m.m: # from
- b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
- logger.info(
- ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
-
- # def _print_weights(self):
- # for m in self.model.modules():
- # if type(m) is Bottleneck:
- # logger.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
-
- def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
- logger.info('Fusing layers... ')
- for m in self.model.modules():
- if type(m) is Conv and hasattr(m, 'bn'):
- m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
- delattr(m, 'bn') # remove batchnorm
- m.forward = m.fuseforward # update forward
- self.info()
- return self
-
- def nms(self, mode=True): # add or remove NMS module
- present = type(self.model[-1]) is NMS # last layer is NMS
- if mode and not present:
- logger.info('Adding NMS... ')
- m = NMS() # module
- m.f = -1 # from
- m.i = self.model[-1].i + 1 # index
- self.model.add_module(name='%s' % m.i, module=m) # add
- self.eval()
- elif not mode and present:
- logger.info('Removing NMS... ')
- self.model = self.model[:-1] # remove
- return self
-
- def autoshape(self): # add AutoShape module
- logger.info('Adding AutoShape... ')
- m = AutoShape(self) # wrap model
- copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes
- return m
-
- def info(self, verbose=False, img_size=640): # print model information
- model_info(self, verbose, img_size)
-
-
-def parse_model(d, ch): # model_dict, input_channels(3)
- logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
- anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
- na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
- no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
-
- layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
- for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
- m = eval(m) if isinstance(m, str) else m # eval strings
- for j, a in enumerate(args):
- try:
- args[j] = eval(a) if isinstance(a, str) else a # eval strings
- except NameError:
- pass # keep plain string args (e.g. 'nearest') unchanged
-
- n = max(round(n * gd), 1) if n > 1 else n # depth gain
- if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP,
- C3, C3TR]:
- c1, c2 = ch[f], args[0]
- if c2 != no: # if not output
- c2 = make_divisible(c2 * gw, 8)
-
- args = [c1, c2, *args[1:]]
- if m in [BottleneckCSP, C3, C3TR]:
- args.insert(2, n) # number of repeats
- n = 1
- elif m is nn.BatchNorm2d:
- args = [ch[f]]
- elif m is Concat:
- c2 = sum([ch[x] for x in f])
- elif m is Detect:
- args.append([ch[x] for x in f])
- if isinstance(args[1], int): # number of anchors
- args[1] = [list(range(args[1] * 2))] * len(f)
- elif m is Contract:
- c2 = ch[f] * args[0] ** 2
- elif m is Expand:
- c2 = ch[f] // args[0] ** 2
- else:
- c2 = ch[f]
-
- m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module
- t = str(m)[8:-2].replace('__main__.', '') # module type
- np = sum([x.numel() for x in m_.parameters()]) # number params
- m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
- logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print
- save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
- layers.append(m_)
- if i == 0:
- ch = []
- ch.append(c2)
- return nn.Sequential(*layers), sorted(save)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- opt = parser.parse_args()
- opt.cfg = check_file(opt.cfg) # check file
- set_logging()
- device = select_device(opt.device)
-
- # Create model
- model = Model(opt.cfg).to(device)
- model.train()
-
- # Profile
- # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 320, 320).to(device)
- # y = model(img, profile=True)
-
- # Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898)
- # from torch.utils.tensorboard import SummaryWriter
- # tb_writer = SummaryWriter('.')
- # logger.info("Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/")
- # tb_writer.add_graph(torch.jit.trace(model, img, strict=False), []) # add model graph
- # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard
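
The inference branch of `Detect.forward` above decodes sigmoid outputs into pixel space: box centers as `(2*sigmoid - 0.5 + grid) * stride` and sizes as `(2*sigmoid)**2 * anchor`, bounding the center offset to [-0.5, 1.5] grid cells and the size to at most 4x the anchor. A self-contained sketch of that transform for one level; `decode_level` is a name invented for this example:

```python
import torch

def decode_level(y, grid, anchor_wh, stride):
    """y: sigmoid outputs shaped (bs, na, ny, nx, no); returns pixel-space boxes."""
    xy = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride  # center, offset within the cell
    wh = (y[..., 2:4] * 2.0) ** 2 * anchor_wh       # width/height, capped at 4x anchor
    return torch.cat((xy, wh, y[..., 4:]), -1)

y = torch.full((1, 3, 20, 20, 85), 0.5)             # sigmoid(0) = 0.5 everywhere
grid = torch.zeros(1, 1, 20, 20, 2)                 # all-zero cell offsets
anchor = torch.tensor([116.0, 90.0]).view(1, 1, 1, 1, 2)
out = decode_level(y, grid, anchor, stride=32)      # xy = 16.0, wh = the anchor
```
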
diff --git a/cv/detection/yolov5/pytorch/models/yolov5l.yaml b/cv/detection/yolov5/pytorch/models/yolov5l.yaml
deleted file mode 100644
index 0c130c1514af8fbba9ac35ccf7ace38eeaf1d491..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/yolov5l.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Focus, [64, 3]], # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
- [-1, 3, C3, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
- [-1, 9, C3, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
- [-1, 9, C3, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
- [-1, 1, SPP, [1024, [5, 9, 13]]],
- [-1, 3, C3, [1024, False]], # 9
- ]
-
-# YOLOv5 head
-head:
- [[-1, 1, Conv, [512, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 6], 1, Concat, [1]], # cat backbone P4
- [-1, 3, C3, [512, False]], # 13
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 4], 1, Concat, [1]], # cat backbone P3
- [-1, 3, C3, [256, False]], # 17 (P3/8-small)
-
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, 14], 1, Concat, [1]], # cat head P4
- [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
-
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, 10], 1, Concat, [1]], # cat head P5
- [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
-
- [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/yolov5m.yaml b/cv/detection/yolov5/pytorch/models/yolov5m.yaml
deleted file mode 100644
index e477b3433d397dfea3a52322ad0dfb64e15b1ee1..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/yolov5m.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 0.67 # model depth multiple
-width_multiple: 0.75 # layer channel multiple
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Focus, [64, 3]], # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
- [-1, 3, C3, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
- [-1, 9, C3, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
- [-1, 9, C3, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
- [-1, 1, SPP, [1024, [5, 9, 13]]],
- [-1, 3, C3, [1024, False]], # 9
- ]
-
-# YOLOv5 head
-head:
- [[-1, 1, Conv, [512, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 6], 1, Concat, [1]], # cat backbone P4
- [-1, 3, C3, [512, False]], # 13
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 4], 1, Concat, [1]], # cat backbone P3
- [-1, 3, C3, [256, False]], # 17 (P3/8-small)
-
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, 14], 1, Concat, [1]], # cat head P4
- [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
-
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, 10], 1, Concat, [1]], # cat head P5
- [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
-
- [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/models/yolov5s.yaml b/cv/detection/yolov5/pytorch/models/yolov5s.yaml
deleted file mode 100644
index e85442dc9188421c47fb4e7cb6f727174a141da2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/yolov5s.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 0.33 # model depth multiple
-width_multiple: 0.50 # layer channel multiple
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Focus, [64, 3]], # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
- [-1, 3, C3, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
- [-1, 9, C3, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
- [-1, 9, C3, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
- [-1, 1, SPP, [1024, [5, 9, 13]]],
- [-1, 3, C3, [1024, False]], # 9
- ]
-
-# YOLOv5 head
-head:
- [[-1, 1, Conv, [512, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 6], 1, Concat, [1]], # cat backbone P4
- [-1, 3, C3, [512, False]], # 13
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 4], 1, Concat, [1]], # cat backbone P3
- [-1, 3, C3, [256, False]], # 17 (P3/8-small)
-
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, 14], 1, Concat, [1]], # cat head P4
- [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
-
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, 10], 1, Concat, [1]], # cat head P5
- [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
-
- [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
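
With the yolov5s multiples above (`depth_multiple: 0.33`, `width_multiple: 0.50`), the nominal values in each row shrink before the model is built; for example the `[-1, 9, C3, [512]]` backbone row becomes 3 repeats of a 256-channel block. Worked out with the same scaling rule sketched earlier:

```python
import math

gd, gw = 0.33, 0.50                # yolov5s depth/width multiples
n = max(round(9 * gd), 1)          # 9 repeats    -> 3
c2 = math.ceil(512 * gw / 8) * 8   # 512 channels -> 256
assert (n, c2) == (3, 256)
```
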
diff --git a/cv/detection/yolov5/pytorch/models/yolov5x.yaml b/cv/detection/yolov5/pytorch/models/yolov5x.yaml
deleted file mode 100644
index c7ca03589ab8ea1bfee5efbe3c7b80b46ed9f8b0..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/models/yolov5x.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
-# Parameters
-nc: 80 # number of classes
-depth_multiple: 1.33 # model depth multiple
-width_multiple: 1.25 # layer channel multiple
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# YOLOv5 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Focus, [64, 3]], # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
- [-1, 3, C3, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
- [-1, 9, C3, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
- [-1, 9, C3, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
- [-1, 1, SPP, [1024, [5, 9, 13]]],
- [-1, 3, C3, [1024, False]], # 9
- ]
-
-# YOLOv5 head
-head:
- [[-1, 1, Conv, [512, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 6], 1, Concat, [1]], # cat backbone P4
- [-1, 3, C3, [512, False]], # 13
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 4], 1, Concat, [1]], # cat backbone P3
- [-1, 3, C3, [256, False]], # 17 (P3/8-small)
-
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, 14], 1, Concat, [1]], # cat head P4
- [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
-
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, 10], 1, Concat, [1]], # cat head P5
- [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
-
- [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov5/pytorch/requirements.txt b/cv/detection/yolov5/pytorch/requirements.txt
deleted file mode 100644
index b1b9e1951fb92163109e154954119d4d4f1a5c1e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/requirements.txt
+++ /dev/null
@@ -1,27 +0,0 @@
-# pip install -r requirements.txt
-
-# base ----------------------------------------
-matplotlib>=3.2.2
-numpy>=1.19.5
-Pillow
-scipy>=1.4.1
-tqdm>=4.41.0
-
-# logging -------------------------------------
-tensorboard>=2.4.1
-# wandb
-
-# plotting ------------------------------------
-seaborn>=0.11.0
-pandas
-
-# export --------------------------------------
-# coremltools>=4.1
-# onnx>=1.9.0
-# scikit-learn==0.19.2 # for coreml quantization
-
-# extras --------------------------------------
-# Cython # for pycocotools https://github.com/cocodataset/cocoapi/issues/172
-# pycocotools>=2.0 # COCO mAP
-# albumentations>=1.0.0
-thop # FLOPs computation
diff --git a/cv/detection/yolov5/pytorch/run.sh b/cv/detection/yolov5/pytorch/run.sh
deleted file mode 100644
index ed5cda7d4b22ed2e4e09c1e0cd43208694c31cc6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/run.sh
+++ /dev/null
@@ -1,38 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-start_time=$(date +%s)
-unset no_proxy use_proxy https_proxy http_proxy
-EXIT_STATUS=0
-check_status() {
- if ((${PIPESTATUS[0]} != 0)); then
- EXIT_STATUS=1
- fi
-}
-
-python3 -m torch.distributed.launch --nproc_per_node=16 \
- train.py --batch-size 32 \
- --data ./data/coco.yaml --weights "" \
- --cfg models/yolov5m.yaml --workers 16 \
- --epochs 3 --linear-lr "$@"
-check_status
-
-wait
-
-end_time=$(date +%s)
-e2e_time=$(($end_time - $start_time))
-echo "end to end time: $e2e_time" >>total_time.log
-exit ${EXIT_STATUS}
diff --git a/cv/detection/yolov5/pytorch/run_dist_training.sh b/cv/detection/yolov5/pytorch/run_dist_training.sh
deleted file mode 100644
index bb7766a2dd432c5de95db27696588b5517430767..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/run_dist_training.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-python3 -m torch.distributed.launch --nproc_per_node 8 train.py --data ./data/coco.yaml --batch-size 128 --cfg ./models/yolov5s.yaml --device 0,1,2,3,4,5,6,7
diff --git a/cv/detection/yolov5/pytorch/run_inference.sh b/cv/detection/yolov5/pytorch/run_inference.sh
deleted file mode 100644
index 3f4226fbc9ee681d9f1d0ea0fc1207d5a16beabb..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/run_inference.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-EXIT_STATUS=0
-check_status() {
- if ((${PIPESTATUS[0]} != 0)); then
- EXIT_STATUS=1
- fi
-}
-
-python3 test.py --task val --data data/coco128.yaml --weights weights/yolov5s.pt 2>&1 | tee inferencelog.log
-check_status
-exit ${EXIT_STATUS}
diff --git a/cv/detection/yolov5/pytorch/run_training.sh b/cv/detection/yolov5/pytorch/run_training.sh
deleted file mode 100644
index 8b92c09c9267ad4d5eda93deb11055a5c64c6d74..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/run_training.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2022, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-python3 train.py --data ./data/coco.yaml --batch-size 32 --cfg ./models/yolov5s.yaml
diff --git a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_amp_torch.sh b/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_amp_torch.sh
deleted file mode 100644
index b84888011df7f1ee71b7b90a6014655560baea16..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_amp_torch.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-export PYTORCH_DISABLE_VEC_KERNEL=1
-export PT_USE_CUDNN_BATCHNORM_SPATIAL_PERSISTENT=1
-cd ..
-bash run_training.sh --data ./data/coco128.yaml --amp "$@"
-cd -
diff --git a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_dist_torch.sh b/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_dist_torch.sh
deleted file mode 100644
index 0a37d6c7a3d5650cb5cd1f8d575d2cf60f029cd5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_dist_torch.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-cd ..
-bash run_dist_training.sh --data ./data/coco128.yaml "$@"
-cd -
diff --git a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_torch.sh b/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_torch.sh
deleted file mode 100644
index c206974245492b3568c5eb85f0ded2bf0f809236..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco128_torch.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-cd ..
-bash run_training.sh --data ./data/coco128.yaml "$@"
-cd -
diff --git a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_amp_torch.sh b/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_amp_torch.sh
deleted file mode 100644
index fe09a71af5a7f123c4fff99f2aa8892c50dff3f9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_amp_torch.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-export PYTORCH_DISABLE_VEC_KERNEL=1
-export PT_USE_CUDNN_BATCHNORM_SPATIAL_PERSISTENT=1
-cd ..
-bash run_training.sh --data ./data/coco.yaml --amp "$@"
-cd -
diff --git a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_dist_torch.sh b/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_dist_torch.sh
deleted file mode 100644
index 3434fee60cc38489585c7b96c785980f149701fb..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_dist_torch.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-cd ..
-bash run_dist_training.sh --data ./data/coco.yaml "$@"
diff --git a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_torch.sh b/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_torch.sh
deleted file mode 100644
index 94da449ae65fb8dbceccd0a0234685def4ba53ed..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/start_scripts/train_yolov5s_coco_torch.sh
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-cd ..
-bash run_training.sh --data ./data/coco.yaml "$@"
-cd -
\ No newline at end of file
diff --git a/cv/detection/yolov5/pytorch/test.py b/cv/detection/yolov5/pytorch/test.py
deleted file mode 100644
index 643dc441e5215bb89075f6cc6e34e803b81af793..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/test.py
+++ /dev/null
@@ -1,366 +0,0 @@
-"""Test a trained YOLOv5 model accuracy on a custom dataset
-
-Usage:
- $ python path/to/test.py --data coco128.yaml --weights yolov5s.pt --img 640
-"""
-
-import argparse
-import json
-import os
-import sys
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-import torch
-import yaml
-from tqdm import tqdm
-
-FILE = Path(__file__).absolute()
-sys.path.append(FILE.parents[0].as_posix()) # add yolov5/ to path
-
-from models.experimental import attempt_load
-from utils.datasets import create_dataloader
-from utils.general import coco80_to_coco91_class, check_dataset, check_file, check_img_size, check_requirements, \
- box_iou, non_max_suppression, scale_coords, xyxy2xywh, xywh2xyxy, set_logging, increment_path, colorstr
-from utils.metrics import ap_per_class, ConfusionMatrix
-from utils.plots import plot_images, output_to_target, plot_study_txt
-from utils.torch_utils import select_device, time_synchronized
-
-
-@torch.no_grad()
-def run(data,
- weights=None, # model.pt path(s)
- batch_size=32, # batch size
- imgsz=640, # inference size (pixels)
- conf_thres=0.001, # confidence threshold
- iou_thres=0.6, # NMS IoU threshold
- task='val', # train, val, test, speed or study
- device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
- single_cls=False, # treat as single-class dataset
- augment=False, # augmented inference
- verbose=False, # verbose output
- save_txt=False, # save results to *.txt
- save_hybrid=False, # save label+prediction hybrid results to *.txt
- save_conf=False, # save confidences in --save-txt labels
- save_json=False, # save a cocoapi-compatible JSON results file
- project='runs/test', # save to project/name
- name='exp', # save to project/name
- exist_ok=False, # existing project/name ok, do not increment
- half=True, # use FP16 half-precision inference
- model=None,
- dataloader=None,
- save_dir=Path(''),
- plots=True,
- wandb_logger=None,
- compute_loss=None,
- ):
- # Initialize/load model and set device
- training = model is not None
- if training: # called by train.py
- device = next(model.parameters()).device # get model device
-
- else: # called directly
- device = select_device(device, batch_size=batch_size)
-
- # Directories
- save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
- (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
-
- # Load model
- model = attempt_load(weights, map_location=device) # load FP32 model
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- imgsz = check_img_size(imgsz, s=gs) # check image size
-
- # Multi-GPU disabled, incompatible with .half() https://github.com/ultralytics/yolov5/issues/99
- # if device.type != 'cpu' and torch.cuda.device_count() > 1:
- # model = nn.DataParallel(model)
-
- # Data
- with open(data) as f:
- data = yaml.safe_load(f)
- check_dataset(data) # check
-
- # Half
- half &= device.type != 'cpu' # half precision only supported on CUDA
- if half:
- model.half()
-
- # Configure
- model.eval()
- is_coco = type(data['val']) is str and data['val'].endswith('coco/val2017.txt') # COCO dataset
- nc = 1 if single_cls else int(data['nc']) # number of classes
- iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95
- niou = iouv.numel()
-
- # Logging
- log_imgs = 0
- if wandb_logger and wandb_logger.wandb:
- log_imgs = min(wandb_logger.log_imgs, 100)
- # Dataloader
- if not training:
- if device.type != 'cpu':
- model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
- task = task if task in ('train', 'val', 'test') else 'val' # path to train/val/test images
- dataloader = create_dataloader(data[task], imgsz, batch_size, gs, single_cls, pad=0.5, rect=True,
- prefix=colorstr(f'{task}: '))[0]
-
- seen = 0
- confusion_matrix = ConfusionMatrix(nc=nc)
- names = {k: v for k, v in enumerate(model.names if hasattr(model, 'names') else model.module.names)}
- coco91class = coco80_to_coco91_class()
- s = ('%20s' + '%11s' * 6) % ('Class', 'Images', 'Labels', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')
- p, r, f1, mp, mr, map50, map, t0, t1, t2 = 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.
- loss = torch.zeros(3, device=device)
- jdict, stats, ap, ap_class, wandb_images = [], [], [], [], []
- for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)):
- t_ = time_synchronized()
- img = img.to(device, non_blocking=True)
- img = img.half() if half else img.float() # uint8 to fp16/32
- img /= 255.0 # 0 - 255 to 0.0 - 1.0
- targets = targets.to(device)
- nb, _, height, width = img.shape # batch size, channels, height, width
- t = time_synchronized()
- t0 += t - t_
-
- # Run model
- out, train_out = model(img, augment=augment) # inference and training outputs
- t1 += time_synchronized() - t
-
- # Compute loss
- if compute_loss:
- loss += compute_loss([x.float() for x in train_out], targets)[1][:3] # box, obj, cls
-
- # Run NMS
- targets[:, 2:] *= torch.Tensor([width, height, width, height]).to(device) # to pixels
- lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling
- t = time_synchronized()
- out = non_max_suppression(out, conf_thres, iou_thres, labels=lb, multi_label=True, agnostic=single_cls)
- t2 += time_synchronized() - t
-
- # Statistics per image
- for si, pred in enumerate(out):
- labels = targets[targets[:, 0] == si, 1:]
- nl = len(labels)
- tcls = labels[:, 0].tolist() if nl else [] # target class
- path = Path(paths[si])
- seen += 1
-
- if len(pred) == 0:
- if nl:
- stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
- continue
-
- # Predictions
- if single_cls:
- pred[:, 5] = 0
- predn = pred.clone()
- scale_coords(img[si].shape[1:], predn[:, :4], shapes[si][0], shapes[si][1]) # native-space pred
-
- # Append to text file
- if save_txt:
- gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]] # normalization gain whwh
- for *xyxy, conf, cls in predn.tolist():
- xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
- with open(save_dir / 'labels' / (path.stem + '.txt'), 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- # W&B logging - Media Panel plots
- if len(wandb_images) < log_imgs and wandb_logger.current_epoch > 0: # Check for test operation
- if wandb_logger.current_epoch % wandb_logger.bbox_interval == 0:
- box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
- "class_id": int(cls),
- "box_caption": "%s %.3f" % (names[cls], conf),
- "scores": {"class_score": conf},
- "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()]
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- wandb_images.append(wandb_logger.wandb.Image(img[si], boxes=boxes, caption=path.name))
- wandb_logger.log_training_progress(predn, path, names) if wandb_logger and wandb_logger.wandb_run else None
-
- # Append to pycocotools JSON dictionary
- if save_json:
- # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ...
- image_id = int(path.stem) if path.stem.isnumeric() else path.stem
- box = xyxy2xywh(predn[:, :4]) # xywh
- box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner
- for p, b in zip(pred.tolist(), box.tolist()):
- jdict.append({'image_id': image_id,
- 'category_id': coco91class[int(p[5])] if is_coco else int(p[5]),
- 'bbox': [round(x, 3) for x in b],
- 'score': round(p[4], 5)})
-
- # Assign all predictions as incorrect
- correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device)
- if nl:
- detected = [] # target indices
- tcls_tensor = labels[:, 0]
-
- # target boxes
- tbox = xywh2xyxy(labels[:, 1:5])
- scale_coords(img[si].shape[1:], tbox, shapes[si][0], shapes[si][1]) # native-space labels
- if plots:
- confusion_matrix.process_batch(predn, torch.cat((labels[:, 0:1], tbox), 1))
-
- # Per target class
- for cls in torch.unique(tcls_tensor):
- ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1) # target indices
- pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1) # prediction indices
-
- # Search for detections
- if pi.shape[0]:
- # Prediction to target ious
- ious, i = box_iou(predn[pi, :4], tbox[ti]).max(1) # best ious, indices
-
- # Append detections
- detected_set = set()
- for j in (ious > iouv[0]).nonzero(as_tuple=False):
- d = ti[i[j]] # detected target
- if d.item() not in detected_set:
- detected_set.add(d.item())
- detected.append(d)
- correct[pi[j]] = ious[j] > iouv # iou_thres is 1xn
- if len(detected) == nl: # all targets already located in image
- break
-
- # Append statistics (correct, conf, pcls, tcls)
- stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))
-
- # Plot images
- if plots and batch_i < 3:
- f = save_dir / f'test_batch{batch_i}_labels.jpg' # labels
- Thread(target=plot_images, args=(img, targets, paths, f, names), daemon=True).start()
- f = save_dir / f'test_batch{batch_i}_pred.jpg' # predictions
- Thread(target=plot_images, args=(img, output_to_target(out), paths, f, names), daemon=True).start()
-
- # Compute statistics
- stats = [np.concatenate(x, 0) for x in zip(*stats)] # to numpy
- if len(stats) and stats[0].any():
- p, r, ap, f1, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
- ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95
- mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
- nt = np.bincount(stats[3].astype(np.int64), minlength=nc) # number of targets per class
- else:
- nt = torch.zeros(1)
-
- # Print results
- pf = '%20s' + '%11i' * 2 + '%11.3g' * 4 # print format
- print(pf % ('all', seen, nt.sum(), mp, mr, map50, map))
-
- # Print results per class
- if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats):
- for i, c in enumerate(ap_class):
- print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))
-
- # Print speeds
- t = tuple(x / seen * 1E3 for x in (t0, t1, t2)) # speeds per image
- if not training:
- shape = (batch_size, 3, imgsz, imgsz)
- print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t)
-
- # Plots
- if plots:
- confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))
- if wandb_logger and wandb_logger.wandb:
- val_batches = [wandb_logger.wandb.Image(str(f), caption=f.name) for f in sorted(save_dir.glob('test*.jpg'))]
- wandb_logger.log({"Validation": val_batches})
- if wandb_images:
- wandb_logger.log({"Bounding Box Debugger/Images": wandb_images})
-
- # Save JSON
- if save_json and len(jdict):
- w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights
- anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json') # annotations json
- pred_json = str(save_dir / f"{w}_predictions.json") # predictions json
- print('\nEvaluating pycocotools mAP... saving %s...' % pred_json)
- with open(pred_json, 'w') as f:
- json.dump(jdict, f)
-
- try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
- check_requirements(['pycocotools'])
- from pycocotools.coco import COCO
- from pycocotools.cocoeval import COCOeval
-
- anno = COCO(anno_json) # init annotations api
- pred = anno.loadRes(pred_json) # init predictions api
- eval = COCOeval(anno, pred, 'bbox')
- if is_coco:
- eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files] # image IDs to evaluate
- eval.evaluate()
- eval.accumulate()
- eval.summarize()
- map, map50 = eval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5)
- except Exception as e:
- print(f'pycocotools unable to run: {e}')
-
- # Return results
- model.float() # for training
- if not training:
- s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
- print(f"Results saved to {save_dir}{s}")
- maps = np.zeros(nc) + map
- for i, c in enumerate(ap_class):
- maps[c] = ap[i]
- return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t
-
-
-def parse_opt():
- parser = argparse.ArgumentParser(prog='test.py')
- parser.add_argument('--data', type=str, default='data/coco128.yaml', help='dataset.yaml path')
- parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)')
- parser.add_argument('--batch-size', type=int, default=32, help='batch size')
- parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')
- parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold')
- parser.add_argument('--task', default='val', help='train, val, test, speed or study')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
- parser.add_argument('--augment', action='store_true', help='augmented inference')
- parser.add_argument('--verbose', action='store_true', help='report mAP by class')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
- parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt')
- parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
- parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file')
- parser.add_argument('--project', default='runs/test', help='save to project/name')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
- opt = parser.parse_args()
- opt.save_json |= opt.data.endswith('coco.yaml')
- opt.save_txt |= opt.save_hybrid
- opt.data = check_file(opt.data) # check file
- return opt
-
-
-def main(opt):
- set_logging()
- print(colorstr('test: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
- check_requirements(exclude=('tensorboard', 'thop'))
-
- if opt.task in ('train', 'val', 'test'): # run normally
- run(**vars(opt))
-
- elif opt.task == 'speed': # speed benchmarks
- for w in opt.weights if isinstance(opt.weights, list) else [opt.weights]:
- run(opt.data, weights=w, batch_size=opt.batch_size, imgsz=opt.imgsz, conf_thres=.25, iou_thres=.45,
- save_json=False, plots=False)
-
- elif opt.task == 'study': # run over a range of settings and save/plot
- # python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s.pt yolov5m.pt yolov5l.pt yolov5x.pt
- x = list(range(256, 1536 + 128, 128)) # x axis (image sizes)
- for w in opt.weights if isinstance(opt.weights, list) else [opt.weights]:
- f = f'study_{Path(opt.data).stem}_{Path(w).stem}.txt' # filename to save to
- y = [] # y axis
- for i in x: # img-size
- print(f'\nRunning {f} point {i}...')
- r, _, t = run(opt.data, weights=w, batch_size=opt.batch_size, imgsz=i, conf_thres=opt.conf_thres,
- iou_thres=opt.iou_thres, save_json=opt.save_json, plots=False)
- y.append(r + t) # results and times
- np.savetxt(f, y, fmt='%10.4g') # save
- os.system('zip -r study.zip study_*.txt')
- plot_study_txt(x=x) # plot
-
-
-if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
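
For each prediction, `test.py` above records a boolean row against the ten IoU thresholds in `iouv = torch.linspace(0.5, 0.95, 10)` (`correct[pi[j]] = ious[j] > iouv`); the first column of that matrix feeds mAP@0.5 and the full matrix mAP@0.5:0.95. A toy illustration of the thresholding step with made-up IoU values:

```python
import torch

iouv = torch.linspace(0.5, 0.95, 10)      # IoU thresholds 0.50, 0.55, ..., 0.95
ious = torch.tensor([0.92, 0.62, 0.30])   # best IoU for three example predictions
correct = ious[:, None] > iouv[None, :]   # (3, 10) bool, one row per prediction
print(correct.sum(1))                     # tensor([9, 3, 0]) thresholds passed
```
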
diff --git a/cv/detection/yolov5/pytorch/train.py b/cv/detection/yolov5/pytorch/train.py
deleted file mode 100644
index f8f0f95a22846e7f2f76f649f68e88357cf30bfb..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/train.py
+++ /dev/null
@@ -1,842 +0,0 @@
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Train a YOLOv5 model on a custom dataset
-
-Usage:
- $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640
-"""
-
-import argparse
-import logging
-import os
-import random
-import sys
-import time
-import warnings
-from copy import deepcopy
-from pathlib import Path
-from threading import Thread
-import traceback
-
-import torch
-
-try:
- from torch.utils.tensorboard import SummaryWriter
-except ImportError:  # tensorboard not installed; fall back to the no-op stub below
- class SummaryWriter(object):
- def __init__(self, log_dir=None, comment='', purge_step=None, max_queue=10,
- flush_secs=120, filename_suffix=''):
- if not log_dir:
- import socket
- from datetime import datetime
- current_time = datetime.now().strftime('%b%d_%H-%M-%S')
- log_dir = os.path.join(
- 'runs', current_time + '_' + socket.gethostname() + comment)
- self.log_dir = log_dir
- self.purge_step = purge_step
- self.max_queue = max_queue
- self.flush_secs = flush_secs
- self.filename_suffix = filename_suffix
-
- # Initialize the file writers, but they can be cleared out on close
- # and recreated later as needed.
- self.file_writer = self.all_writers = None
- self._get_file_writer()
-
- # Create default bins for histograms, see generate_testdata.py in tensorflow/tensorboard
- v = 1E-12
- buckets = []
- neg_buckets = []
- while v < 1E20:
- buckets.append(v)
- neg_buckets.append(-v)
- v *= 1.1
- self.default_bins = neg_buckets[::-1] + [0] + buckets
-
- def _check_caffe2_blob(self, item): pass
-
- def _get_file_writer(self): pass
-
- def get_logdir(self):
- """Returns the directory where event files will be written."""
- return self.log_dir
-
- def add_hparams(self, hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None): pass
-
- def add_scalar(self, tag, scalar_value, global_step=None, walltime=None, new_style=False): pass
-
- def add_scalars(self, main_tag, tag_scalar_dict, global_step=None, walltime=None): pass
-
- def add_histogram(self, tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None): pass
-
- def add_histogram_raw(self, tag, min, max, num, sum, sum_squares, bucket_limits, bucket_counts, global_step=None, walltime=None): pass
-
- def add_image(self, tag, img_tensor, global_step=None, walltime=None, dataformats='CHW'): pass
-
- def add_images(self, tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW'): pass
-
- def add_image_with_boxes(self, tag, img_tensor, box_tensor, global_step=None, walltime=None, rescale=1, dataformats='CHW', labels=None): pass
-
- def add_figure(self, tag, figure, global_step=None, close=True, walltime=None): pass
-
- def add_video(self, tag, vid_tensor, global_step=None, fps=4, walltime=None): pass
-
- def add_audio(self, tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None): pass
-
- def add_text(self, tag, text_string, global_step=None, walltime=None): pass
-
- def add_onnx_graph(self, prototxt): pass
-
- def add_graph(self, model, input_to_model=None, verbose=False): pass
-
- @staticmethod
- def _encode(rawstr): pass
-
- def add_embedding(self, mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None): pass
-
- def add_pr_curve(self, tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_pr_curve_raw(self, tag, true_positive_counts, false_positive_counts, true_negative_counts, false_negative_counts, precision, recall, global_step=None, num_thresholds=127, weights=None, walltime=None): pass
-
- def add_custom_scalars_multilinechart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars_marginchart(self, tags, category='default', title='untitled'): pass
-
- def add_custom_scalars(self, layout): pass
-
- def add_mesh(self, tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None): pass
-
- def flush(self): pass
-
- def close(self): pass
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.close()
-
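-# The stub above is a "null object" fallback: when tensorboard is missing, every
-# logging call (add_scalar, add_graph, ...) silently does nothing, so the training
-# loop below can log unconditionally without guarding each call site.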
-
-import math
-import numpy as np
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-import torch.optim.lr_scheduler as lr_scheduler
-import torch.utils.data
-import yaml
-from torch.cuda import amp
-from torch.nn.parallel import DistributedDataParallel as DDP
-from tqdm import tqdm
-
-FILE = Path(__file__).absolute()
-sys.path.append(FILE.parents[0].as_posix()) # add yolov5/ to path
-
-import test # for end-of-epoch mAP
-from models.experimental import attempt_load
-from models.yolo import Model
-from utils.autoanchor import check_anchors
-from utils.datasets import create_dataloader
-from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
- strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
- check_requirements, print_mutation, set_logging, one_cycle, colorstr
-from utils.google_utils import attempt_download
-from utils.loss import ComputeLoss
-from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
-from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, de_parallel
-from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume
-from utils.metrics import fitness
-
-logger = logging.getLogger(__name__)
-LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html
-RANK = int(os.getenv('RANK', -1))
-WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
-
-
-def train(hyp, # path/to/hyp.yaml or hyp dictionary
- opt,
- device,
- ):
-    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, notest, nosave, workers = \
- opt.save_dir, opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
- opt.resume, opt.notest, opt.nosave, opt.workers
-
- # Directories
- save_dir = Path(save_dir)
- wdir = save_dir / 'weights'
- wdir.mkdir(parents=True, exist_ok=True) # make dir
- last = wdir / 'last.pt'
- best = wdir / 'best.pt'
- results_file = save_dir / 'results.txt'
-
- # Hyperparameters
- if isinstance(hyp, str):
- with open(hyp) as f:
- hyp = yaml.safe_load(f) # load hyps dict
- logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
-
- # Save run settings
- with open(save_dir / 'hyp.yaml', 'w') as f:
- yaml.safe_dump(hyp, f, sort_keys=False)
- with open(save_dir / 'opt.yaml', 'w') as f:
- yaml.safe_dump(vars(opt), f, sort_keys=False)
-
- # Configure
- plots = not evolve # create plots
- cuda = device.type != 'cpu'
- init_seeds(1)
- with open(data) as f:
- data_dict = yaml.safe_load(f) # data dict
-
- # Loggers
- loggers = {'wandb': None, 'tb': None} # loggers dict
- if RANK in [-1, 0]:
- # TensorBoard
- if not evolve:
- prefix = colorstr('tensorboard: ')
- logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
- loggers['tb'] = SummaryWriter(str(save_dir))
-
- # W&B
- opt.hyp = hyp # add hyperparameters
- run_id = torch.load(weights).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
- run_id = run_id if opt.resume else None # start fresh run if transfer learning
- wandb_logger = WandbLogger(opt, save_dir.stem, run_id, data_dict)
- loggers['wandb'] = wandb_logger.wandb
- if loggers['wandb']:
- data_dict = wandb_logger.data_dict
- weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp # may update weights, epochs if resuming
-
- nc = 1 if single_cls else int(data_dict['nc']) # number of classes
- names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names
- assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, data) # check
- is_coco = data.endswith('coco.yaml') and nc == 80 # COCO dataset
-
- # Model
- pretrained = weights.endswith('.pt')
- if pretrained:
- with torch_distributed_zero_first(RANK):
- weights = attempt_download(weights) # download if not found locally
- ckpt = torch.load(weights, map_location=device) # load checkpoint
- model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else [] # exclude keys
- state_dict = ckpt['model'].float().state_dict() # to FP32
- state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect
- model.load_state_dict(state_dict, strict=False) # load
- logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report
- else:
- model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- with torch_distributed_zero_first(RANK):
- check_dataset(data_dict) # check
- train_path = data_dict['train']
- test_path = data_dict['val']
-
- # Freeze
- freeze = [] # parameter names to freeze (full or partial)
- for k, v in model.named_parameters():
- v.requires_grad = True # train all layers
- if any(x in k for x in freeze):
- print('freezing %s' % k)
- v.requires_grad = False
-
- # Optimizer
- nbs = 64 # nominal batch size
- accumulate = max(round(nbs / batch_size), 1) # accumulate loss before optimizing
- hyp['weight_decay'] *= batch_size * accumulate / nbs # scale weight_decay
- logger.info(f"Scaled weight_decay = {hyp['weight_decay']}")
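-    # e.g. batch_size=8 (the default): accumulate = round(64 / 8) = 8, so gradients are
-    # accumulated over 8 batches per optimizer step and weight_decay scales by 8 * 8 / 64 = 1.0;
-    # with batch_size=48: accumulate = 1 and weight_decay scales by 48 * 1 / 64 = 0.75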
-
- pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
- for k, v in model.named_modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- pg2.append(v.bias) # biases
- if isinstance(v, nn.BatchNorm2d):
- pg0.append(v.weight) # no decay
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- pg1.append(v.weight) # apply decay
-
- if opt.adam:
- optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
- else:
- optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
-
- optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay
- optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
- logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
- del pg0, pg1, pg2
-
- # Scheduler https://arxiv.org/pdf/1812.01187.pdf
- # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
- if opt.linear_lr:
- lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear
- else:
- lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf']
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- # plot_lr_scheduler(optimizer, scheduler, epochs)
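-    # e.g. with hyp['lrf']=0.2 and epochs=10, the linear lf multiplies lr0 by 1.0 at
-    # epoch 0 and by 0.2 at epoch 9; one_cycle traces a cosine between the same
-    # endpoints, decaying slowly at the start and fastest mid-training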
-
- # EMA
- ema = ModelEMA(model) if RANK in [-1, 0] else None
-
- # Resume
- start_epoch, best_fitness = 0, 0.0
- if pretrained:
- # Optimizer
- if ckpt['optimizer'] is not None:
- optimizer.load_state_dict(ckpt['optimizer'])
- best_fitness = ckpt['best_fitness']
-
- # EMA
- if ema and ckpt.get('ema'):
- ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
- ema.updates = ckpt['updates']
-
- # Results
- if ckpt.get('training_results') is not None:
- results_file.write_text(ckpt['training_results']) # write results.txt
-
- # Epochs
- start_epoch = ckpt['epoch'] + 1
- if resume:
- assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
- if epochs < start_epoch:
- logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
- (weights, ckpt['epoch'], epochs))
- epochs += ckpt['epoch'] # finetune additional epochs
-
- del ckpt, state_dict
-
- # Image sizes
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj'])
- imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples
-
- # DP mode
- # if cuda and RANK == -1 and torch.cuda.device_count() > 1:
- # logging.warning('DP not recommended, instead use torch.distributed.run for best DDP Multi-GPU results.\n'
- # 'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')
- # model = torch.nn.DataParallel(model)
-
- # SyncBatchNorm
- if opt.sync_bn and cuda and RANK != -1:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
- logger.info('Using SyncBatchNorm()')
-
- # Trainloader
- dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, single_cls,
- hyp=hyp, augment=False, cache=opt.cache_images, rect=opt.rect, rank=RANK,
- workers=workers,
- image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
- mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class
- nb = len(dataloader) # number of batches
- assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, data, nc - 1)
-
- # Process 0
- if RANK in [-1, 0]:
- testloader = create_dataloader(test_path, imgsz_test, batch_size, gs, single_cls,
- hyp=hyp, cache=opt.cache_images and not notest, rect=True, rank=-1,
- workers=workers,
- pad=0.5, prefix=colorstr('val: '))[0]
-
- if not resume:
- labels = np.concatenate(dataset.labels, 0)
- c = torch.tensor(labels[:, 0]) # classes
- # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency
- # model._initialize_biases(cf.to(device))
- if plots:
- plot_labels(labels, names, save_dir, loggers)
- if loggers['tb']:
- loggers['tb'].add_histogram('classes', c, 0) # TensorBoard
-
- # Anchors
- if not opt.noautoanchor:
- check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
- if opt.amp:
- model.half().float() # pre-reduce anchor precision
- else:
- model.float()
-
- # DDP mode
- if cuda and RANK != -1:
- model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)
-
- # Model parameters
- hyp['box'] *= 3. / nl # scale to layers
- hyp['cls'] *= nc / 80. * 3. / nl # scale to classes and layers
- hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl # scale to image size and layers
- hyp['label_smoothing'] = opt.label_smoothing
- model.nc = nc # attach number of classes to model
- model.hyp = hyp # attach hyperparameters to model
- model.gr = 1.0 # iou loss ratio (obj_loss = 1.0 or iou)
- model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights
- model.names = names
-
- # Start training
- t0 = time.time()
- nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations)
- # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
- last_opt_step = -1
- maps = np.zeros(nc) # mAP per class
- results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
- scheduler.last_epoch = start_epoch - 1 # do not move
- scaler = amp.GradScaler(enabled=cuda)
- compute_loss = ComputeLoss(model) # init loss class
- logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n'
- f'Using {dataloader.num_workers} dataloader workers\n'
- f'Logging results to {save_dir}\n'
- f'Starting training for {epochs} epochs...')
- for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
- model.train()
-
- # Update image weights (optional)
- if opt.image_weights:
- # Generate indices
- if RANK in [-1, 0]:
- cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights
- iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
- dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
- # Broadcast if DDP
- if RANK != -1:
- indices = (torch.tensor(dataset.indices) if RANK == 0 else torch.zeros(dataset.n)).int()
- dist.broadcast(indices, 0)
- if RANK != 0:
- dataset.indices = indices.cpu().numpy()
-
- # Update mosaic border
- # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
- # dataset.mosaic_border = [b - imgsz, -b] # height, width borders
-
- mloss = torch.zeros(4, device=device) # mean losses
- if RANK != -1:
- dataloader.sampler.set_epoch(epoch)
- pbar = enumerate(dataloader)
-        logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'img_size', 'total_fps'))
- if RANK in [-1, 0]:
- pbar = tqdm(pbar, total=nb) # progress bar
- optimizer.zero_grad()
- for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
- step_start_time = time.time()
-
- ni = i + nb * epoch # number integrated batches (since train start)
- imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0
-
- # Warmup
- if ni <= nw:
- xi = [0, nw] # x interp
- # model.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
- accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())
- for j, x in enumerate(optimizer.param_groups):
- # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
- x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
- if 'momentum' in x:
- x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
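-                # e.g. with nw=1000 and nbs/batch_size = 64/8: at ni=500, accumulate interpolates
-                # to max(1, round(np.interp(500, [0, 1000], [1, 8]))) = 4, ramping gradient
-                # accumulation up in step with the warmup lr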
-
- # Multi-scale
- if opt.multi_scale:
-                sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs  # size (randrange needs ints)
- sf = sz / max(imgs.shape[2:]) # scale factor
- if sf != 1:
- ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
- imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
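-            # e.g. imgsz=640, gs=32: sz is drawn from {320, 352, ..., 960} and the batch is
-            # stretched so both sides remain multiples of the grid size gs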
-
- # Forward
- if opt.amp:
- with amp.autocast(enabled=cuda):
- pred = model(imgs) # forward
- loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size
- if RANK != -1:
- loss *= WORLD_SIZE # gradient averaged between devices in DDP mode
- if opt.quad:
- loss *= 4.
- else:
- pred = model(imgs) # forward
- loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size
- if RANK != -1:
- loss *= WORLD_SIZE # gradient averaged between devices in DDP mode
- if opt.quad:
- loss *= 4.
-
- if not math.isfinite(loss[0]):
- print("Loss is {}, stopping training".format(loss[0]))
- sys.exit(1)
-
- # Backward
- if opt.amp:
- scaler.scale(loss).backward()
- else:
- loss.backward()
- torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.)
-
- # Optimize
- if ni - last_opt_step >= accumulate:
- if opt.amp:
- scaler.unscale_(optimizer)
- torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.)
- scaler.step(optimizer) # optimizer.step
- scaler.update()
- else:
- optimizer.step()
- optimizer.zero_grad()
- if ema:
- ema.update(model)
- last_opt_step = ni
-
- step_end_time = time.time()
- fps = len(imgs) / (step_end_time - step_start_time)
- if torch.distributed.is_initialized():
- fps = fps * torch.distributed.get_world_size()
-
- # Print
- if RANK in [-1, 0]:
- mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
- mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
- s = ('%10s' * 2 + '%10.4g' * 6) % (
- f'{epoch}/{epochs - 1}', mem, *mloss, imgs.shape[-1], fps)
- pbar.set_description(s)
-
- if nb > 1000:
- log_freq = 100
- else:
- log_freq = 20
- if "USE_DLTEST" in os.environ and i % log_freq == 0:
- print(".")
-
- # Plot
- if plots and ni < 3:
- f = save_dir / f'train_batch{ni}.jpg' # filename
- Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
- if loggers['tb'] and ni == 0: # TensorBoard
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress jit trace warning
- loggers['tb'].add_graph(torch.jit.trace(de_parallel(model), imgs[0:1], strict=False), [])
- elif plots and ni == 10 and loggers['wandb']:
- wandb_logger.log({'Mosaics': [loggers['wandb'].Image(str(x), caption=x.name) for x in
- save_dir.glob('train*.jpg') if x.exists()]})
-
- # end batch ------------------------------------------------------------------------------------------------
-
- # Scheduler
- lr = [x['lr'] for x in optimizer.param_groups] # for loggers
- scheduler.step()
-
- # DDP process 0 or single-GPU
- stop_train = [False]
- if RANK in [-1, 0]:
- # mAP
- try:
- ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
- final_epoch = epoch + 1 == epochs
- if not notest or final_epoch: # Calculate mAP
- wandb_logger.current_epoch = epoch + 1
- results, maps, _ = test.run(data_dict,
- batch_size=batch_size // WORLD_SIZE * 2,
- imgsz=imgsz_test,
- model=ema.ema,
- single_cls=single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- save_json=is_coco and final_epoch,
- verbose=nc < 50 and final_epoch,
- plots=plots and final_epoch,
- wandb_logger=wandb_logger,
- compute_loss=compute_loss)
-            except Exception:  # don't let a failed eval crash DDP; flag it and stop cleanly below
- traceback.print_exc()
- stop_train[0] = True
-
- # Write
- with open(results_file, 'a') as f:
- f.write(s + '%10.4g' * 7 % results + '\n') # append metrics, val_loss
-
- # Log
- tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss
- 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
- 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss
- 'x/lr0', 'x/lr1', 'x/lr2'] # params
- for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
- if loggers['tb']:
- loggers['tb'].add_scalar(tag, x, epoch) # TensorBoard
- if loggers['wandb']:
- wandb_logger.log({tag: x}) # W&B
-
- # Update best mAP
- fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
- if fi > best_fitness:
- best_fitness = fi
- wandb_logger.end_epoch(best_result=best_fitness == fi)
-
- # Save model
- if (not nosave) or (final_epoch and not evolve): # if save
- ckpt = {'epoch': epoch,
- 'best_fitness': best_fitness,
- 'training_results': results_file.read_text(),
- 'model': deepcopy(de_parallel(model)).half() if opt.amp else deepcopy(de_parallel(model)),
- 'ema': deepcopy(ema.ema).half() if opt.amp else deepcopy(ema.ema),
- 'updates': ema.updates,
- 'optimizer': optimizer.state_dict(),
- 'wandb_id': wandb_logger.wandb_run.id if loggers['wandb'] else None}
-
- # Save last, best and delete
- torch.save(ckpt, last)
- if best_fitness == fi:
- torch.save(ckpt, best)
- if loggers['wandb']:
- if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1:
- wandb_logger.log_model(last.parent, opt, epoch, fi, best_model=best_fitness == fi)
- del ckpt
-
- # Fix destroy process exception
- if dist.is_initialized() and dist.get_world_size() > 1:
- dist.broadcast_object_list(stop_train, src=0)
- if stop_train[0]:
- dist.destroy_process_group()
- return
-
- # end epoch ----------------------------------------------------------------------------------------------------
- # end training -----------------------------------------------------------------------------------------------------
- if RANK in [-1, 0]:
- logger.info(f'{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.\n')
- if plots:
- plot_results(save_dir=save_dir) # save as results.png
- if loggers['wandb']:
- files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]]
- wandb_logger.log({"Results": [loggers['wandb'].Image(str(save_dir / f), caption=f) for f in files
- if (save_dir / f).exists()]})
-
- if not evolve:
- if is_coco: # COCO dataset
- for m in [last, best] if best.exists() else [last]: # speed, mAP tests
- results, _, _ = test.run(data_dict,
- batch_size=batch_size // WORLD_SIZE * 2,
- imgsz=imgsz_test,
- conf_thres=0.001,
- iou_thres=0.7,
- model=attempt_load(m, device).half() if opt.amp else attempt_load(m, device),
- single_cls=single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- save_json=True,
- plots=False)
-
- # Strip optimizers
- for f in last, best:
- if f.exists():
- strip_optimizer(f) # strip optimizers
- if loggers['wandb']: # Log the stripped model
- loggers['wandb'].log_artifact(str(best if best.exists() else last), type='model',
- name='run_' + wandb_logger.wandb_run.id + '_model',
- aliases=['latest', 'best', 'stripped'])
- wandb_logger.finish_run()
-
- torch.cuda.empty_cache()
- return results
-
-
-def parse_opt(known=False):
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default='data/coco128.yaml', help='dataset.yaml path')
- parser.add_argument('--hyp', type=str, default='data/hyps/hyp.scratch.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=10)
- parser.add_argument('--batch-size', type=int, default=8, help='total batch size for all GPUs')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--notest', action='store_true', help='only test final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
- parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
- parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
- parser.add_argument('--project', default='runs/train', help='save to project/name')
- parser.add_argument('--entity', default=None, help='W&B entity')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--quad', action='store_true', help='quad dataloader')
- parser.add_argument('--linear-lr', action='store_true', help='linear LR')
- parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
- parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
- parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
- parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
- parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
- parser.add_argument('--local_rank', '--local-rank', type=int, default=-1, help='DDP parameter, do not modify')
- parser.add_argument('--amp', action='store_true', default=False, help='use amp to train and test')
- opt = parser.parse_known_args()[0] if known else parser.parse_args()
- return opt
-
-
-def main(opt):
- set_logging(RANK)
- if RANK in [-1, 0]:
- print(colorstr('train: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
- # check_git_status()
- check_requirements(exclude=['thop'])
-
- # Resume
- wandb_run = check_wandb_resume(opt)
- if opt.resume and not wandb_run: # resume an interrupted run
- ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path
- assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
- with open(Path(ckpt).parent.parent / 'opt.yaml') as f:
- opt = argparse.Namespace(**yaml.safe_load(f)) # replace
- opt.cfg, opt.weights, opt.resume = '', ckpt, True # reinstate
- logger.info('Resuming training from %s' % ckpt)
- else:
- # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
- opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
- assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
- opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test)
- opt.name = 'evolve' if opt.evolve else opt.name
- opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve))
-
- print("Global setting:", LOCAL_RANK, RANK, WORLD_SIZE)
-
- try:
- from dltest import show_training_arguments
- show_training_arguments(opt)
-    except Exception:  # dltest is optional; ignore if unavailable
- pass
-
- # DDP mode
- device = select_device(opt.device, batch_size=opt.batch_size)
- if LOCAL_RANK != -1:
- from datetime import timedelta
- assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'
- torch.cuda.set_device(LOCAL_RANK)
- device = torch.device('cuda', LOCAL_RANK)
-
- dist_backend = "nccl"
- DIST_BACKEND_ENV = "PT_DIST_BACKEND"
- if DIST_BACKEND_ENV in os.environ:
- print("WARN: Use the distributed backend of the environment.")
- dist_backend = os.environ[DIST_BACKEND_ENV]
-
- dist.init_process_group(backend=dist_backend, rank=RANK, world_size=WORLD_SIZE)
- # assert opt.batch_size % WORLD_SIZE == 0, '--batch-size must be multiple of CUDA device count'
- assert not opt.image_weights, '--image-weights argument is not compatible with DDP training'
-
- # Train
- if not opt.evolve:
- train(opt.hyp, opt, device)
- if WORLD_SIZE > 1 and RANK == 0:
- _ = [print('Destroying process group... ', end=''), dist.destroy_process_group(), print('Done.')]
-
- # Evolve hyperparameters (optional)
- else:
- # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
- meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
- 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
- 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
- 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
- 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
- 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
- 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
- 'box': (1, 0.02, 0.2), # box loss gain
- 'cls': (1, 0.2, 4.0), # cls loss gain
- 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
- 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
- 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
- 'iou_t': (0, 0.1, 0.7), # IoU training threshold
- 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
- 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
- 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
- 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
- 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
- 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
- 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
- 'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
- 'scale': (1, 0.0, 0.9), # image scale (+/- gain)
- 'shear': (1, 0.0, 10.0), # image shear (+/- deg)
- 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
- 'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
- 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
-                'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
- 'mixup': (1, 0.0, 1.0), # image mixup (probability)
- 'copy_paste': (1, 0.0, 1.0)} # segment copy-paste (probability)
-
- with open(opt.hyp) as f:
- hyp = yaml.safe_load(f) # load hyps dict
- if 'anchors' not in hyp: # anchors commented in hyp.yaml
- hyp['anchors'] = 3
- assert LOCAL_RANK == -1, 'DDP mode not implemented for --evolve'
- opt.notest, opt.nosave = True, True # only test/save final epoch
- # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
- yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml' # save best result here
- if opt.bucket:
- os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists
-
- for _ in range(opt.evolve): # generations to evolve
- if Path('evolve.txt').exists(): # if evolve.txt exists: select best hyps and mutate
- # Select parent(s)
- parent = 'single' # parent selection method: 'single' or 'weighted'
- x = np.loadtxt('evolve.txt', ndmin=2)
- n = min(5, len(x)) # number of previous results to consider
- x = x[np.argsort(-fitness(x))][:n] # top n mutations
- w = fitness(x) - fitness(x).min() + 1E-6 # weights (sum > 0)
- if parent == 'single' or len(x) == 1:
- # x = x[random.randint(0, n - 1)] # random selection
- x = x[random.choices(range(n), weights=w)[0]] # weighted selection
- elif parent == 'weighted':
- x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
-
- # Mutate
- mp, s = 0.8, 0.2 # mutation probability, sigma
- npr = np.random
- npr.seed(int(time.time()))
- g = np.array([x[0] for x in meta.values()]) # gains 0-1
- ng = len(meta)
- v = np.ones(ng)
- while all(v == 1): # mutate until a change occurs (prevent duplicates)
- v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
- for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
- hyp[k] = float(x[i + 7] * v[i]) # mutate
-
- # Constrain to limits
- for k, v in meta.items():
- hyp[k] = max(hyp[k], v[1]) # lower limit
- hyp[k] = min(hyp[k], v[2]) # upper limit
- hyp[k] = round(hyp[k], 5) # significant digits
-
- # Train mutation
- results = train(hyp.copy(), opt, device)
-
- # Write mutation results
- print_mutation(hyp.copy(), results, yaml_file, opt.bucket)
-
- # Plot results
- plot_evolution(yaml_file)
- print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n'
- f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}')
-
-
-def run(**kwargs):
- # Usage: import train; train.run(imgsz=320, weights='yolov5m.pt')
- opt = parse_opt(True)
- for k, v in kwargs.items():
- setattr(opt, k, v)
- main(opt)
-
-
-if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
diff --git a/cv/detection/yolov5/pytorch/utils/__init__.py b/cv/detection/yolov5/pytorch/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/cv/detection/yolov5/pytorch/utils/activations.py b/cv/detection/yolov5/pytorch/utils/activations.py
deleted file mode 100644
index 92a3b5eaa54bcb46464dff900db247b0436e5046..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/activations.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Activation functions
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-# SiLU https://arxiv.org/pdf/1606.08415.pdf ----------------------------------------------------------------------------
-class SiLU(nn.Module): # export-friendly version of nn.SiLU()
- @staticmethod
- def forward(x):
- return x * torch.sigmoid(x)
-
-
-class Hardswish(nn.Module): # export-friendly version of nn.Hardswish()
- @staticmethod
- def forward(x):
- # return x * F.hardsigmoid(x) # for torchscript and CoreML
- return x * F.hardtanh(x + 3, 0., 6.) / 6. # for torchscript, CoreML and ONNX
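-        # e.g. forward(torch.tensor([-4., 0., 4.])) -> tensor([-0., 0., 4.]), matching
-        # nn.Hardswish while exporting cleanly to TorchScript, CoreML and ONNX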
-
-
-# Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------
-class Mish(nn.Module):
- @staticmethod
- def forward(x):
- return x * F.softplus(x).tanh()
-
-
-class MemoryEfficientMish(nn.Module):
- class F(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x):
- ctx.save_for_backward(x)
- return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x)))
-
- @staticmethod
- def backward(ctx, grad_output):
- x = ctx.saved_tensors[0]
- sx = torch.sigmoid(x)
- fx = F.softplus(x).tanh()
- return grad_output * (fx + x * sx * (1 - fx * fx))
-
- def forward(self, x):
- return self.F.apply(x)
-
-
-# FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------
-class FReLU(nn.Module):
- def __init__(self, c1, k=3): # ch_in, kernel
- super().__init__()
- self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)
- self.bn = nn.BatchNorm2d(c1)
-
- def forward(self, x):
- return torch.max(x, self.bn(self.conv(x)))
-
-
-# ACON https://arxiv.org/pdf/2009.04759.pdf ----------------------------------------------------------------------------
-class AconC(nn.Module):
- r""" ACON activation (activate or not).
- AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter
- according to "Activate or Not: Learning Customized Activation" .
- """
-
- def __init__(self, c1):
- super().__init__()
- self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
- self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
- self.beta = nn.Parameter(torch.ones(1, c1, 1, 1))
-
- def forward(self, x):
- dpx = (self.p1 - self.p2) * x
- return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x
-
-
-class MetaAconC(nn.Module):
- r""" ACON activation (activate or not).
- MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network
- according to "Activate or Not: Learning Customized Activation" .
- """
-
- def __init__(self, c1, k=1, s=1, r=16): # ch_in, kernel, stride, r
- super().__init__()
- c2 = max(r, c1 // r)
- self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
- self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
- self.fc1 = nn.Conv2d(c1, c2, k, s, bias=True)
- self.fc2 = nn.Conv2d(c2, c1, k, s, bias=True)
- # self.bn1 = nn.BatchNorm2d(c2)
- # self.bn2 = nn.BatchNorm2d(c1)
-
- def forward(self, x):
-        y = x.mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
- # batch-size 1 bug/instabilities https://github.com/ultralytics/yolov5/issues/2891
- # beta = torch.sigmoid(self.bn2(self.fc2(self.bn1(self.fc1(y))))) # bug/unstable
- beta = torch.sigmoid(self.fc2(self.fc1(y))) # bug patch BN layers removed
- dpx = (self.p1 - self.p2) * x
- return dpx * torch.sigmoid(beta * dpx) + self.p2 * x
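-
-
-def _demo_acon():
-    # Illustrative sketch (not part of the original file): both ACON variants act
-    # element-wise, so they preserve the input shape for any (N, C, H, W) tensor.
-    x = torch.randn(2, 16, 8, 8)
-    for m in (AconC(16), MetaAconC(16)):
-        assert m(x).shape == x.shape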
diff --git a/cv/detection/yolov5/pytorch/utils/augmentations.py b/cv/detection/yolov5/pytorch/utils/augmentations.py
deleted file mode 100644
index 74ee4de2131ee0a15275be39a9d27b9fc5c2512d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/augmentations.py
+++ /dev/null
@@ -1,272 +0,0 @@
-# YOLOv5 image augmentation functions
-
-import logging
-import random
-
-import cv2
-import math
-import numpy as np
-
-from utils.general import colorstr, segment2box, resample_segments, check_version
-from utils.metrics import bbox_ioa
-
-
-class Albumentations:
- # YOLOv5 Albumentations class (optional, only used if package is installed)
- def __init__(self):
- self.transform = None
- try:
- import albumentations as A
- check_version(A.__version__, '1.0.0') # version requirement
-
- self.transform = A.Compose([
- A.Blur(p=0.1),
- A.MedianBlur(p=0.1),
- A.ToGray(p=0.01)],
- bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))
-
- logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms))
- except ImportError: # package not installed, skip
- pass
- except Exception as e:
- logging.info(colorstr('albumentations: ') + f'{e}')
-
- def __call__(self, im, labels, p=1.0):
- if self.transform and random.random() < p:
- new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed
- im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
- return im, labels
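-
-# Note: labels here is an (n, 5) array of (cls, x, y, w, h) in normalized YOLO
-# coordinates, hence format='yolo' and the class_labels field in bbox_params above.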
-
-
-def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
- # HSV color-space augmentation
- if hgain or sgain or vgain:
- r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
- hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))
- dtype = im.dtype # uint8
-
- x = np.arange(0, 256, dtype=r.dtype)
- lut_hue = ((x * r[0]) % 180).astype(dtype)
- lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
- lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
-
- img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
- cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed
-
-
-def hist_equalize(im, clahe=True, bgr=False):
- # Equalize histogram on BGR image 'img' with img.shape(n,m,3) and range 0-255
- yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)
- if clahe:
- c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
- yuv[:, :, 0] = c.apply(yuv[:, :, 0])
- else:
- yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram
- return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB
-
-
-def replicate(im, labels):
- # Replicate labels
- h, w = im.shape[:2]
- boxes = labels[:, 1:].astype(int)
- x1, y1, x2, y2 = boxes.T
- s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
- for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
- x1b, y1b, x2b, y2b = boxes[i]
- bh, bw = y2b - y1b, x2b - x1b
- yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y
- x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
- im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
-
- return im, labels
-
-
-def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
- # Resize and pad image while meeting stride-multiple constraints
- shape = im.shape[:2] # current shape [height, width]
- if isinstance(new_shape, int):
- new_shape = (new_shape, new_shape)
-
- # Scale ratio (new / old)
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
- if not scaleup: # only scale down, do not scale up (for better test mAP)
- r = min(r, 1.0)
-
- # Compute padding
- ratio = r, r # width, height ratios
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
- dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
- if auto: # minimum rectangle
- dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
- elif scaleFill: # stretch
- dw, dh = 0.0, 0.0
- new_unpad = (new_shape[1], new_shape[0])
- ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
-
- dw /= 2 # divide padding into 2 sides
- dh /= 2
-
- if shape[::-1] != new_unpad: # resize
- im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
- im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
- return im, ratio, (dw, dh)
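-
-# e.g. a (720, 1280) frame with new_shape=640, auto=True, stride=32: r = 0.5,
-# new_unpad = (640, 360), dh = 280 % 32 = 24, giving a (384, 640) output with
-# 12 px of gray padding on the top and bottom.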
-
-
-def random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,
- border=(0, 0)):
- # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
- # targets = [cls, xyxy]
-
- height = im.shape[0] + border[0] * 2 # shape(h,w,c)
- width = im.shape[1] + border[1] * 2
-
- # Center
- C = np.eye(3)
- C[0, 2] = -im.shape[1] / 2 # x translation (pixels)
- C[1, 2] = -im.shape[0] / 2 # y translation (pixels)
-
- # Perspective
- P = np.eye(3)
- P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
- P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
-
- # Rotation and Scale
- R = np.eye(3)
- a = random.uniform(-degrees, degrees)
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
- s = random.uniform(1 - scale, 1 + scale)
- # s = 2 ** random.uniform(-scale, scale)
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
-
- # Shear
- S = np.eye(3)
- S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
- S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
-
- # Translation
- T = np.eye(3)
- T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
- T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
-
- # Combined rotation matrix
- M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
- if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
- if perspective:
- im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114))
- else: # affine
- im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
-
- # Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(img[:, :, ::-1]) # base
- # ax[1].imshow(img2[:, :, ::-1]) # warped
-
- # Transform label coordinates
- n = len(targets)
- if n:
- use_segments = any(x.any() for x in segments)
- new = np.zeros((n, 4))
- if use_segments: # warp segments
- segments = resample_segments(segments) # upsample
- for i, segment in enumerate(segments):
- xy = np.ones((len(segment), 3))
- xy[:, :2] = segment
- xy = xy @ M.T # transform
- xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine
-
- # clip
- new[i] = segment2box(xy, width, height)
-
- else: # warp boxes
- xy = np.ones((n * 4, 3))
- xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
- xy = xy @ M.T # transform
- xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine
-
- # create new boxes
- x = xy[:, [0, 2, 4, 6]]
- y = xy[:, [1, 3, 5, 7]]
- new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
-
- # clip
- new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
- new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
-
- # filter candidates
- i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)
- targets = targets[i]
- targets[:, 1:5] = new[i]
-
- return im, targets
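-
-# M composes right-to-left: center, perspective, rotate/scale, shear, translate.
-# Warping once with the combined matrix avoids resampling the image at every stage.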
-
-
-def copy_paste(im, labels, segments, probability=0.5):
- # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- if probability and n:
- h, w, c = im.shape # height, width, channels
- im_new = np.zeros(im.shape, np.uint8)
- for j in random.sample(range(n), k=round(probability * n)):
- l, s = labels[j], segments[j]
- box = w - l[3], l[2], w - l[1], l[4]
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- if (ioa < 0.30).all(): # allow 30% obscuration of existing labels
- labels = np.concatenate((labels, [[l[0], *box]]), 0)
- segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
- cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
-
- result = cv2.bitwise_and(src1=im, src2=im_new)
- result = cv2.flip(result, 1) # augment segments (flip left-right)
- i = result > 0 # pixels to replace
- # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch
- im[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug
-
- return im, labels, segments
-
-
-def cutout(im, labels):
- # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
- h, w = im.shape[:2]
-
- # create random masks
- scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
- for s in scales:
- mask_h = random.randint(1, int(h * s))
- mask_w = random.randint(1, int(w * s))
-
- # box
- xmin = max(0, random.randint(0, w) - mask_w // 2)
- ymin = max(0, random.randint(0, h) - mask_h // 2)
- xmax = min(w, xmin + mask_w)
- ymax = min(h, ymin + mask_h)
-
- # apply random color mask
- im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
-
- # return unobscured labels
- if len(labels) and s > 0.03:
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- labels = labels[ioa < 0.60] # remove >60% obscured labels
-
- return labels
-
-
-def mixup(im, labels, im2, labels2):
- # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf
- r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0
- im = (im * r + im2 * (1 - r)).astype(np.uint8)
- labels = np.concatenate((labels, labels2), 0)
- return im, labels
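-
-# With alpha = beta = 32.0 the sampled ratio r concentrates tightly around 0.5
-# (std ~ 0.06), so blends are near-even; labels from both images are concatenated
-# at full weight rather than being reweighted by r.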
-
-
-def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n)
- # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
- w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
- w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
- ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
- return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates
diff --git a/cv/detection/yolov5/pytorch/utils/autoanchor.py b/cv/detection/yolov5/pytorch/utils/autoanchor.py
deleted file mode 100644
index 87dc394c832e628a0b1c7b39aeb1e7c6584c63b3..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/autoanchor.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# Auto-anchor utils
-
-import numpy as np
-import torch
-import yaml
-from tqdm import tqdm
-
-from utils.general import colorstr
-
-
-def check_anchor_order(m):
- # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary
- a = m.anchor_grid.prod(-1).view(-1) # anchor area
- da = a[-1] - a[0] # delta a
- ds = m.stride[-1] - m.stride[0] # delta s
-    if da.sign() != ds.sign():  # order mismatch, flip anchors
- print('Reversing anchor order')
- m.anchors[:] = m.anchors.flip(0)
- m.anchor_grid[:] = m.anchor_grid.flip(0)
-
-
-def check_anchors(dataset, model, thr=4.0, imgsz=640):
- # Check anchor fit to data, recompute if necessary
- prefix = colorstr('autoanchor: ')
- print(f'\n{prefix}Analyzing anchors... ', end='')
- m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
- shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale
- wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh
-
- def metric(k): # compute metric
- r = wh[:, None] / k[None]
- x = torch.min(r, 1. / r).min(2)[0] # ratio metric
- best = x.max(1)[0] # best_x
- aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold
- bpr = (best > 1. / thr).float().mean() # best possible recall
- return bpr, aat
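-    # thr=4.0 means a label counts as covered when its best anchor matches both width
-    # and height within a factor of 4; bpr (best possible recall) is the fraction of
-    # labels covered, and recomputation triggers below when bpr < 0.98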
-
- anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors
- bpr, aat = metric(anchors)
- print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='')
- if bpr < 0.98: # threshold to recompute
- print('. Attempting to improve anchors, please wait...')
- na = m.anchor_grid.numel() // 2 # number of anchors
- try:
- anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
- except Exception as e:
- print(f'{prefix}ERROR: {e}')
- new_bpr = metric(anchors)[0]
- if new_bpr > bpr: # replace anchors
- anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
- m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference
- m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss
- check_anchor_order(m)
- print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.')
- else:
- print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.')
- print('') # newline
-
-
-def kmean_anchors(path='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
- """ Creates kmeans-evolved anchors from training dataset
-
- Arguments:
- path: path to dataset *.yaml, or a loaded dataset
- n: number of anchors
- img_size: image size used for training
- thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
- gen: generations to evolve anchors using genetic algorithm
- verbose: print all results
-
- Return:
- k: kmeans evolved anchors
-
- Usage:
- from utils.autoanchor import *; _ = kmean_anchors()
- """
- from scipy.cluster.vq import kmeans
-
- thr = 1. / thr
- prefix = colorstr('autoanchor: ')
-
- def metric(k, wh): # compute metrics
- r = wh[:, None] / k[None]
- x = torch.min(r, 1. / r).min(2)[0] # ratio metric
- # x = wh_iou(wh, torch.tensor(k)) # iou metric
- return x, x.max(1)[0] # x, best_x
-
- def anchor_fitness(k): # mutation fitness
- _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
- return (best * (best > thr).float()).mean() # fitness
-
- def print_results(k):
- k = k[np.argsort(k.prod(1))] # sort small to large
- x, best = metric(k, wh0)
- bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
- print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr')
- print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, '
- f'past_thr={x[x > thr].mean():.3f}-mean: ', end='')
- for i, x in enumerate(k):
- print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg
- return k
-
- if isinstance(path, str): # *.yaml file
- with open(path) as f:
- data_dict = yaml.safe_load(f) # model dict
- from utils.datasets import LoadImagesAndLabels
- dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
- else:
- dataset = path # dataset
-
- # Get label wh
- shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
-
- # Filter
- i = (wh0 < 3.0).any(1).sum()
- if i:
- print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
- wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels
- # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
-
- # Kmeans calculation
- print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...')
- s = wh.std(0) # sigmas for whitening
- k, dist = kmeans(wh / s, n, iter=30) # points, mean distance
-    assert len(k) == n, f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}'
- k *= s
- wh = torch.tensor(wh, dtype=torch.float32) # filtered
- wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered
- k = print_results(k)
-
- # Plot
- # k, d = [None] * 20, [None] * 20
- # for i in tqdm(range(1, 21)):
- # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
- # ax = ax.ravel()
- # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
- # ax[0].hist(wh[wh[:, 0]<100, 0],400)
- # ax[1].hist(wh[wh[:, 1]<100, 1],400)
- # fig.savefig('wh.png', dpi=200)
-
- # Evolve
- npr = np.random
-    f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, anchor shape, mutation prob, sigma
- pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar
- for _ in pbar:
- v = np.ones(sh)
- while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
- v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
- kg = (k.copy() * v).clip(min=2.0)
- fg = anchor_fitness(kg)
- if fg > f:
- f, k = fg, kg.copy()
- pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
- if verbose:
- print_results(k)
-
- return print_results(k)
diff --git a/cv/detection/yolov5/pytorch/utils/aws/__init__.py b/cv/detection/yolov5/pytorch/utils/aws/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/cv/detection/yolov5/pytorch/utils/aws/mime.sh b/cv/detection/yolov5/pytorch/utils/aws/mime.sh
deleted file mode 100644
index c319a83cfbdf09bea634c3bd9fca737c0b1dd505..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/aws/mime.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
-# This script will run on every instance restart, not only on first start
-# --- DO NOT COPY ABOVE COMMENTS WHEN PASTING INTO USERDATA ---
-
-Content-Type: multipart/mixed; boundary="//"
-MIME-Version: 1.0
-
---//
-Content-Type: text/cloud-config; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="cloud-config.txt"
-
-#cloud-config
-cloud_final_modules:
-- [scripts-user, always]
-
---//
-Content-Type: text/x-shellscript; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="userdata.txt"
-
-#!/bin/bash
-# --- paste contents of userdata.sh here ---
---//
diff --git a/cv/detection/yolov5/pytorch/utils/aws/resume.py b/cv/detection/yolov5/pytorch/utils/aws/resume.py
deleted file mode 100644
index 4b0d4246b594acddbecf065956fc8729bb96ec36..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/aws/resume.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Resume all interrupted trainings in yolov5/ dir including DDP trainings
-# Usage: $ python utils/aws/resume.py
-
-import os
-import sys
-from pathlib import Path
-
-import torch
-import yaml
-
-sys.path.append('./') # to run '$ python *.py' files in subdirectories
-
-port = 0 # --master_port
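-# note: the port is incremented once per DDP job below; very low port numbers may require elevated privileges to bind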
-path = Path('').resolve()
-for last in path.rglob('*/**/last.pt'):
- ckpt = torch.load(last)
- if ckpt['optimizer'] is None:
- continue
-
- # Load opt.yaml
- with open(last.parent.parent / 'opt.yaml') as f:
- opt = yaml.safe_load(f)
-
- # Get device count
- d = opt['device'].split(',') # devices
- nd = len(d) # number of devices
- ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel
-
- if ddp: # multi-GPU
- port += 1
- cmd = f'python -m torch.distributed.launch --nproc_per_node {nd} --master_port {port} train.py --resume {last}'
- else: # single-GPU
- cmd = f'python train.py --resume {last}'
-
-    cmd += ' > /dev/null 2>&1 &'  # redirect output to /dev/null and run in the background
- print(cmd)
- os.system(cmd)
diff --git a/cv/detection/yolov5/pytorch/utils/aws/userdata.sh b/cv/detection/yolov5/pytorch/utils/aws/userdata.sh
deleted file mode 100644
index 0c28d0a2cae05301e92cb9ed1775d7a1e1e5cc3c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/aws/userdata.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
-# This script will run only once on first instance start (for a re-start script see mime.sh)
-# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir
-# Use >300 GB SSD
-
-cd /home/ubuntu
-if [ ! -d yolov5 ]; then
- echo "Running first-time script." # install dependencies, download COCO, pull Docker
- git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5
- cd yolov5
- bash data/scripts/get_coco.sh && echo "COCO done." &
- sudo docker pull ultralytics/yolov5:latest && echo "Docker done." &
-    python3 -m pip install --upgrade pip && pip3 install -r requirements.txt && python3 detect.py && echo "Requirements done." &
- wait && echo "All tasks done." # finish background tasks
-else
- echo "Running re-start script." # resume interrupted runs
- i=0
- list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour'
- while IFS= read -r id; do
- ((i++))
- echo "restarting container $i: $id"
- sudo docker start $id
- # sudo docker exec -it $id python train.py --resume # single-GPU
- sudo docker exec -d $id python utils/aws/resume.py # multi-scenario
- done <<<"$list"
-fi
diff --git a/cv/detection/yolov5/pytorch/utils/datasets.py b/cv/detection/yolov5/pytorch/utils/datasets.py
deleted file mode 100644
index 0bcfdcc1cda6d86bfe95ad0578238228826ba1ce..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/datasets.py
+++ /dev/null
@@ -1,930 +0,0 @@
-# YOLOv5 dataset utils and dataloaders
-
-import glob
-import hashlib
-import json
-import logging
-import os
-import random
-import shutil
-import time
-from itertools import repeat
-from multiprocessing.pool import ThreadPool, Pool
-from pathlib import Path
-from threading import Thread
-
-import cv2
-import numpy as np
-import torch
-import torch.nn.functional as F
-import yaml
-from PIL import Image, ExifTags
-from torch.utils.data import Dataset
-from tqdm import tqdm
-
-from utils.augmentations import Albumentations, augment_hsv, copy_paste, letterbox, mixup, random_perspective
-from utils.general import check_requirements, check_file, check_dataset, xywh2xyxy, xywhn2xyxy, xyxy2xywhn, \
- xyn2xy, segments2boxes, clean_str
-from utils.torch_utils import torch_distributed_zero_first
-
-# Parameters
-help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
-img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo'] # acceptable image suffixes
-vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes
-num_threads = min(8, os.cpu_count()) # number of multiprocessing threads
-logger = logging.getLogger(__name__)
-
-# Get orientation exif tag
-for orientation in ExifTags.TAGS.keys():
- if ExifTags.TAGS[orientation] == 'Orientation':
- break
-
-
-def get_hash(paths):
- # Returns a single hash value of a list of paths (files or dirs)
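-    # the hash mixes the total byte size with the joined path strings, so it changes when files are added, resized, or moved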
- size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes
- h = hashlib.md5(str(size).encode()) # hash sizes
- h.update(''.join(paths).encode()) # hash paths
- return h.hexdigest() # return hash
-
-
-def exif_size(img):
- # Returns exif-corrected PIL size
- s = img.size # (width, height)
- try:
- rotation = dict(img._getexif().items())[orientation]
- if rotation == 6: # rotation 270
- s = (s[1], s[0])
- elif rotation == 8: # rotation 90
- s = (s[1], s[0])
-    except Exception:
- pass
-
- return s
-
-
-def exif_transpose(image):
- """
- Transpose a PIL image accordingly if it has an EXIF Orientation tag.
- From https://github.com/python-pillow/Pillow/blob/master/src/PIL/ImageOps.py
-
- :param image: The image to transpose.
- :return: An image.
- """
- exif = image.getexif()
- orientation = exif.get(0x0112, 1) # default 1
- if orientation > 1:
- method = {2: Image.FLIP_LEFT_RIGHT,
- 3: Image.ROTATE_180,
- 4: Image.FLIP_TOP_BOTTOM,
- 5: Image.TRANSPOSE,
- 6: Image.ROTATE_270,
- 7: Image.TRANSVERSE,
- 8: Image.ROTATE_90,
- }.get(orientation)
- if method is not None:
- image = image.transpose(method)
- del exif[0x0112]
- image.info["exif"] = exif.tobytes()
- return image
-
-
-def create_dataloader(path, imgsz, batch_size, stride, single_cls=False, hyp=None, augment=False, cache=False, pad=0.0,
- rect=False, rank=-1, workers=8, image_weights=False, quad=False, prefix=''):
-    # Ensure only the first DDP process scans the dataset; the remaining processes then reuse its cache
- with torch_distributed_zero_first(rank):
- dataset = LoadImagesAndLabels(path, imgsz, batch_size,
- augment=augment, # augment images
- hyp=hyp, # augmentation hyperparameters
- rect=rect, # rectangular training
- cache_images=cache,
- single_cls=single_cls,
- stride=int(stride),
- pad=pad,
- image_weights=image_weights,
- prefix=prefix)
-
- batch_size = min(batch_size, len(dataset))
- nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, workers]) # number of workers
- sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None
- loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader
- # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader()
- dataloader = loader(dataset,
- batch_size=batch_size,
- num_workers=nw,
- sampler=sampler,
- pin_memory=True,
- collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn)
- return dataloader, dataset
-
-
-class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader):
- """ Dataloader that reuses workers
-
- Uses same syntax as vanilla DataLoader
- """
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
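-        # object.__setattr__ bypasses DataLoader's attribute guard, which forbids reassigning attributes after __init__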
- self.iterator = super().__iter__()
-
- def __len__(self):
- return len(self.batch_sampler.sampler)
-
- def __iter__(self):
- for i in range(len(self)):
- yield next(self.iterator)
-
-
-class _RepeatSampler(object):
- """ Sampler that repeats forever
-
- Args:
- sampler (Sampler)
- """
-
- def __init__(self, sampler):
- self.sampler = sampler
-
- def __iter__(self):
- while True:
- yield from iter(self.sampler)
-
-
-class LoadImages: # for inference
- def __init__(self, path, img_size=640, stride=32):
- p = str(Path(path).absolute()) # os-agnostic absolute path
- if '*' in p:
- files = sorted(glob.glob(p, recursive=True)) # glob
- elif os.path.isdir(p):
- files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir
- elif os.path.isfile(p):
- files = [p] # files
- else:
- raise Exception(f'ERROR: {p} does not exist')
-
- images = [x for x in files if x.split('.')[-1].lower() in img_formats]
- videos = [x for x in files if x.split('.')[-1].lower() in vid_formats]
- ni, nv = len(images), len(videos)
-
- self.img_size = img_size
- self.stride = stride
- self.files = images + videos
- self.nf = ni + nv # number of files
- self.video_flag = [False] * ni + [True] * nv
- self.mode = 'image'
- if any(videos):
- self.new_video(videos[0]) # new video
- else:
- self.cap = None
- assert self.nf > 0, f'No images or videos found in {p}. ' \
- f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}'
-
- def __iter__(self):
- self.count = 0
- return self
-
- def __next__(self):
- if self.count == self.nf:
- raise StopIteration
- path = self.files[self.count]
-
- if self.video_flag[self.count]:
- # Read video
- self.mode = 'video'
- ret_val, img0 = self.cap.read()
- if not ret_val:
- self.count += 1
- self.cap.release()
- if self.count == self.nf: # last video
- raise StopIteration
- else:
- path = self.files[self.count]
- self.new_video(path)
- ret_val, img0 = self.cap.read()
-
- self.frame += 1
- print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.frames}) {path}: ', end='')
-
- else:
- # Read image
- self.count += 1
- img0 = cv2.imread(path) # BGR
- assert img0 is not None, 'Image Not Found ' + path
- print(f'image {self.count}/{self.nf} {path}: ', end='')
-
- # Padded resize
- img = letterbox(img0, self.img_size, stride=self.stride)[0]
-
- # Convert
- img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB
- img = np.ascontiguousarray(img)
-
- return path, img, img0, self.cap
-
- def new_video(self, path):
- self.frame = 0
- self.cap = cv2.VideoCapture(path)
- self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))
-
- def __len__(self):
- return self.nf # number of files
-
-
-class LoadWebcam: # for inference
- def __init__(self, pipe='0', img_size=640, stride=32):
- self.img_size = img_size
- self.stride = stride
-        self.pipe = int(pipe) if pipe.isnumeric() else pipe  # local camera index or stream address
- self.cap = cv2.VideoCapture(self.pipe) # video capture object
- self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size
-
- def __iter__(self):
- self.count = -1
- return self
-
- def __next__(self):
- self.count += 1
- if cv2.waitKey(1) == ord('q'): # q to quit
- self.cap.release()
- cv2.destroyAllWindows()
- raise StopIteration
-
- # Read frame
- ret_val, img0 = self.cap.read()
- img0 = cv2.flip(img0, 1) # flip left-right
-
- # Print
- assert ret_val, f'Camera Error {self.pipe}'
- img_path = 'webcam.jpg'
- print(f'webcam {self.count}: ', end='')
-
- # Padded resize
- img = letterbox(img0, self.img_size, stride=self.stride)[0]
-
- # Convert
- img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB
- img = np.ascontiguousarray(img)
-
- return img_path, img, img0, None
-
- def __len__(self):
- return 0
-
-
-class LoadStreams: # multiple IP or RTSP cameras
- def __init__(self, sources='streams.txt', img_size=640, stride=32):
- self.mode = 'stream'
- self.img_size = img_size
- self.stride = stride
-
- if os.path.isfile(sources):
- with open(sources, 'r') as f:
- sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())]
- else:
- sources = [sources]
-
- n = len(sources)
- self.imgs, self.fps, self.frames, self.threads = [None] * n, [0] * n, [0] * n, [None] * n
- self.sources = [clean_str(x) for x in sources] # clean source names for later
- for i, s in enumerate(sources): # index, source
- # Start thread to read frames from video stream
- print(f'{i + 1}/{n}: {s}... ', end='')
- if 'youtube.com/' in s or 'youtu.be/' in s: # if source is YouTube video
- check_requirements(('pafy', 'youtube_dl'))
- import pafy
- s = pafy.new(s).getbest(preftype="mp4").url # YouTube URL
-            s = int(s) if s.isnumeric() else s  # i.e. s = '0' local webcam
- cap = cv2.VideoCapture(s)
- assert cap.isOpened(), f'Failed to open {s}'
- w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- self.fps[i] = max(cap.get(cv2.CAP_PROP_FPS) % 100, 0) or 30.0 # 30 FPS fallback
- self.frames[i] = max(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 0) or float('inf') # infinite stream fallback
-
- _, self.imgs[i] = cap.read() # guarantee first frame
- self.threads[i] = Thread(target=self.update, args=([i, cap]), daemon=True)
- print(f" success ({self.frames[i]} frames {w}x{h} at {self.fps[i]:.2f} FPS)")
- self.threads[i].start()
- print('') # newline
-
- # check for common shapes
- s = np.stack([letterbox(x, self.img_size, stride=self.stride)[0].shape for x in self.imgs], 0) # shapes
- self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal
- if not self.rect:
- print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')
-
- def update(self, i, cap):
- # Read stream `i` frames in daemon thread
- n, f, read = 0, self.frames[i], 1 # frame number, frame array, inference every 'read' frame
- while cap.isOpened() and n < f:
- n += 1
- # _, self.imgs[index] = cap.read()
- cap.grab()
- if n % read == 0:
- success, im = cap.retrieve()
- self.imgs[i] = im if success else self.imgs[i] * 0
- time.sleep(1 / self.fps[i]) # wait time
-
- def __iter__(self):
- self.count = -1
- return self
-
- def __next__(self):
- self.count += 1
- if not all(x.is_alive() for x in self.threads) or cv2.waitKey(1) == ord('q'): # q to quit
- cv2.destroyAllWindows()
- raise StopIteration
-
- # Letterbox
- img0 = self.imgs.copy()
- img = [letterbox(x, self.img_size, auto=self.rect, stride=self.stride)[0] for x in img0]
-
- # Stack
- img = np.stack(img, 0)
-
- # Convert
- img = img[..., ::-1].transpose((0, 3, 1, 2)) # BGR to RGB, BHWC to BCHW
- img = np.ascontiguousarray(img)
-
- return self.sources, img, img0, None
-
- def __len__(self):
- return len(self.sources) # 1E12 frames = 32 streams at 30 FPS for 30 years
-
-
-def img2label_paths(img_paths):
- # Define label paths as a function of image paths
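-    # e.g. '.../coco/images/train2017/0001.jpg' -> '.../coco/labels/train2017/0001.txt'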
- sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings
- return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths]
-
-
-class LoadImagesAndLabels(Dataset): # for training/testing
- def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False,
- cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''):
- self.img_size = img_size
- self.augment = augment
- self.hyp = hyp
- self.image_weights = image_weights
- self.rect = False if image_weights else rect
- self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training)
- self.mosaic_border = [-img_size // 2, -img_size // 2]
- self.stride = stride
- self.path = path
- self.albumentations = Albumentations() if augment else None
-
- try:
- f = [] # image files
- for p in path if isinstance(path, list) else [path]:
- p = Path(p) # os-agnostic
- if p.is_dir(): # dir
- f += glob.glob(str(p / '**' / '*.*'), recursive=True)
- # f = list(p.rglob('**/*.*')) # pathlib
- elif p.is_file(): # file
- with open(p, 'r') as t:
- t = t.read().strip().splitlines()
- parent = str(p.parent) + os.sep
- f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path
- # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib)
- else:
- raise Exception(f'{prefix}{p} does not exist')
- self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats])
- # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in img_formats]) # pathlib
- assert self.img_files, f'{prefix}No images found'
- except Exception as e:
- raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {help_url}')
-
- # Check cache
- self.label_files = img2label_paths(self.img_files) # labels
- cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') # cached labels
- if cache_path.is_file():
- cache, exists = torch.load(cache_path), True # load
- if cache.get('version') != 0.3 or cache.get('hash') != get_hash(self.label_files + self.img_files):
- cache, exists = self.cache_labels(cache_path, prefix), False # re-cache
- else:
- cache, exists = self.cache_labels(cache_path, prefix), False # cache
-
- # Display cache
- nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupted, total
- if exists:
- d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted"
- tqdm(None, desc=prefix + d, total=n, initial=n) # display cache results
- if cache['msgs']:
- logging.info('\n'.join(cache['msgs'])) # display warnings
-        assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Cannot train without labels. See {help_url}'
-
- # Read cache
- [cache.pop(k) for k in ('hash', 'version', 'msgs')] # remove items
- labels, shapes, self.segments = zip(*cache.values())
- self.labels = list(labels)
- self.shapes = np.array(shapes, dtype=np.float64)
- self.img_files = list(cache.keys()) # update
- self.label_files = img2label_paths(cache.keys()) # update
- if single_cls:
- for x in self.labels:
- x[:, 0] = 0
-
- n = len(shapes) # number of images
-        bi = np.floor(np.arange(n) / batch_size).astype(int)  # batch index
- nb = bi[-1] + 1 # number of batches
- self.batch = bi # batch index of image
- self.n = n
- self.indices = range(n)
-
- # Rectangular Training
- if self.rect:
- # Sort by aspect ratio
- s = self.shapes # wh
- ar = s[:, 1] / s[:, 0] # aspect ratio
- irect = ar.argsort()
- self.img_files = [self.img_files[i] for i in irect]
- self.label_files = [self.label_files[i] for i in irect]
- self.labels = [self.labels[i] for i in irect]
- self.shapes = s[irect] # wh
- ar = ar[irect]
-
- # Set training image shapes
- shapes = [[1, 1]] * nb
- for i in range(nb):
- ari = ar[bi == i]
- mini, maxi = ari.min(), ari.max()
- if maxi < 1:
- shapes[i] = [maxi, 1]
- elif mini > 1:
- shapes[i] = [1, 1 / mini]
-
-            self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(int) * stride
-
- # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)
- self.imgs = [None] * n
- if cache_images:
- gb = 0 # Gigabytes of cached images
- self.img_hw0, self.img_hw = [None] * n, [None] * n
- results = ThreadPool(num_threads).imap(lambda x: load_image(*x), zip(repeat(self), range(n)))
- pbar = tqdm(enumerate(results), total=n)
- for i, x in pbar:
- self.imgs[i], self.img_hw0[i], self.img_hw[i] = x # img, hw_original, hw_resized = load_image(self, i)
- gb += self.imgs[i].nbytes
- pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB)'
- pbar.close()
-
- def cache_labels(self, path=Path('./labels.cache'), prefix=''):
- # Cache dataset labels, check images and read shapes
- x = {} # dict
- nm, nf, ne, nc, msgs = 0, 0, 0, 0, [] # number missing, found, empty, corrupt, messages
- desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels..."
- with Pool(num_threads) as pool:
- pbar = tqdm(pool.imap_unordered(verify_image_label, zip(self.img_files, self.label_files, repeat(prefix))),
- desc=desc, total=len(self.img_files))
- for im_file, l, shape, segments, nm_f, nf_f, ne_f, nc_f, msg in pbar:
- nm += nm_f
- nf += nf_f
- ne += ne_f
- nc += nc_f
- if im_file:
- x[im_file] = [l, shape, segments]
- if msg:
- msgs.append(msg)
- pbar.desc = f"{desc}{nf} found, {nm} missing, {ne} empty, {nc} corrupted"
-
- pbar.close()
- if msgs:
- logging.info('\n'.join(msgs))
- if nf == 0:
- logging.info(f'{prefix}WARNING: No labels found in {path}. See {help_url}')
- x['hash'] = get_hash(self.label_files + self.img_files)
- x['results'] = nf, nm, ne, nc, len(self.img_files)
- x['msgs'] = msgs # warnings
- x['version'] = 0.3 # cache version
- try:
- torch.save(x, path) # save cache for next time
- logging.info(f'{prefix}New cache created: {path}')
- except Exception as e:
- logging.info(f'{prefix}WARNING: Cache directory {path.parent} is not writeable: {e}') # path not writeable
- return x
-
- def __len__(self):
- return len(self.img_files)
-
- # def __iter__(self):
- # self.count = -1
- # print('ran dataset iter')
- # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)
- # return self
-
- def __getitem__(self, index):
- index = self.indices[index] # linear, shuffled, or image_weights
-
- hyp = self.hyp
- mosaic = self.mosaic and random.random() < hyp['mosaic']
- if mosaic:
- # Load mosaic
- img, labels = load_mosaic(self, index)
- shapes = None
-
- # MixUp augmentation
- if random.random() < hyp['mixup']:
- img, labels = mixup(img, labels, *load_mosaic(self, random.randint(0, self.n - 1)))
-
- else:
- # Load image
- img, (h0, w0), (h, w) = load_image(self, index)
-
- # Letterbox
- shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape
- img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)
- shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling
-
- labels = self.labels[index].copy()
- if labels.size: # normalized xywh to pixel xyxy format
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1])
-
- if self.augment:
- img, labels = random_perspective(img, labels,
- degrees=hyp['degrees'],
- translate=hyp['translate'],
- scale=hyp['scale'],
- shear=hyp['shear'],
- perspective=hyp['perspective'])
-
- nl = len(labels) # number of labels
- if nl:
- labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0]) # xyxy to xywh normalized
-
- if self.augment:
- # Albumentations
- img, labels = self.albumentations(img, labels)
-
- # HSV color-space
- augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])
-
- # Flip up-down
- if random.random() < hyp['flipud']:
- img = np.flipud(img)
- if nl:
- labels[:, 2] = 1 - labels[:, 2]
-
- # Flip left-right
- if random.random() < hyp['fliplr']:
- img = np.fliplr(img)
- if nl:
- labels[:, 1] = 1 - labels[:, 1]
-
- # Cutouts
- # if random.random() < 0.9:
- # labels = cutout(img, labels)
-
- labels_out = torch.zeros((nl, 6))
- if nl:
- labels_out[:, 1:] = torch.from_numpy(labels)
-
- # Convert
- img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB
- img = np.ascontiguousarray(img)
-
- return torch.from_numpy(img), labels_out, self.img_files[index], shapes
-
- @staticmethod
- def collate_fn(batch):
- img, label, path, shapes = zip(*batch) # transposed
- for i, l in enumerate(label):
- l[:, 0] = i # add target image index for build_targets()
- return torch.stack(img, 0), torch.cat(label, 0), path, shapes
-
- @staticmethod
- def collate_fn4(batch):
- img, label, path, shapes = zip(*batch) # transposed
- n = len(shapes) // 4
- img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n]
-
- ho = torch.tensor([[0., 0, 0, 1, 0, 0]])
- wo = torch.tensor([[0., 0, 1, 0, 0, 0]])
- s = torch.tensor([[1, 1, .5, .5, .5, .5]]) # scale
- for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW
- i *= 4
- if random.random() < 0.5:
-                im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., mode='bilinear',
-                                   align_corners=False)[0].type(img[i].type())
- l = label[i]
- else:
- im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2)
- l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s
- img4.append(im)
- label4.append(l)
-
- for i, l in enumerate(label4):
- l[:, 0] = i # add target image index for build_targets()
-
- return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4
-
-
-# Ancillary functions --------------------------------------------------------------------------------------------------
-def load_image(self, index):
- # loads 1 image from dataset, returns img, original hw, resized hw
- img = self.imgs[index]
- if img is None: # not cached
- path = self.img_files[index]
- img = cv2.imread(path) # BGR
- assert img is not None, 'Image Not Found ' + path
- h0, w0 = img.shape[:2] # orig hw
- r = self.img_size / max(h0, w0) # ratio
- if r != 1: # if sizes are not equal
- img = cv2.resize(img, (int(w0 * r), int(h0 * r)),
- interpolation=cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR)
- return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized
- else:
- return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized
-
-
-def load_mosaic(self, index):
- # loads images in a 4-mosaic
-
- labels4, segments4 = [], []
- s = self.img_size
- yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y
- indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img4
- if i == 0: # top left
- img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
- elif i == 1: # top right
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
- elif i == 2: # bottom left
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
- elif i == 3: # bottom right
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
-
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- padw = x1a - x1b
- padh = y1a - y1b
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
- labels4.append(labels)
- segments4.extend(segments)
-
- # Concat/clip labels
- labels4 = np.concatenate(labels4, 0)
- for x in (labels4[:, 1:], *segments4):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img4, labels4 = replicate(img4, labels4) # replicate
-
- # Augment
- img4, labels4, segments4 = copy_paste(img4, labels4, segments4, probability=self.hyp['copy_paste'])
- img4, labels4 = random_perspective(img4, labels4, segments4,
- degrees=self.hyp['degrees'],
- translate=self.hyp['translate'],
- scale=self.hyp['scale'],
- shear=self.hyp['shear'],
- perspective=self.hyp['perspective'],
- border=self.mosaic_border) # border to remove
-
- return img4, labels4
-
-
-def load_mosaic9(self, index):
- # loads images in a 9-mosaic
-
- labels9, segments9 = [], []
- s = self.img_size
- indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img9
- if i == 0: # center
-            img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8)  # base image with 9 tiles
- h0, w0 = h, w
- c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates
- elif i == 1: # top
- c = s, s - h, s + w, s
- elif i == 2: # top right
- c = s + wp, s - h, s + wp + w, s
- elif i == 3: # right
- c = s + w0, s, s + w0 + w, s + h
- elif i == 4: # bottom right
- c = s + w0, s + hp, s + w0 + w, s + hp + h
- elif i == 5: # bottom
- c = s + w0 - w, s + h0, s + w0, s + h0 + h
- elif i == 6: # bottom left
- c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h
- elif i == 7: # left
- c = s - w, s + h0 - h, s, s + h0
- elif i == 8: # top left
- c = s - w, s + h0 - hp - h, s, s + h0 - hp
-
- padx, pady = c[:2]
- x1, y1, x2, y2 = [max(x, 0) for x in c] # allocate coords
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padx, pady) for x in segments]
- labels9.append(labels)
- segments9.extend(segments)
-
- # Image
- img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax]
- hp, wp = h, w # height, width previous
-
- # Offset
- yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border] # mosaic center x, y
- img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s]
-
- # Concat/clip labels
- labels9 = np.concatenate(labels9, 0)
- labels9[:, [1, 3]] -= xc
- labels9[:, [2, 4]] -= yc
- c = np.array([xc, yc]) # centers
- segments9 = [x - c for x in segments9]
-
- for x in (labels9[:, 1:], *segments9):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img9, labels9 = replicate(img9, labels9) # replicate
-
- # Augment
- img9, labels9 = random_perspective(img9, labels9, segments9,
- degrees=self.hyp['degrees'],
- translate=self.hyp['translate'],
- scale=self.hyp['scale'],
- shear=self.hyp['shear'],
- perspective=self.hyp['perspective'],
- border=self.mosaic_border) # border to remove
-
- return img9, labels9
-
-
-def create_folder(path='./new'):
- # Create folder
- if os.path.exists(path):
- shutil.rmtree(path) # delete output folder
- os.makedirs(path) # make new output folder
-
-
-def flatten_recursive(path='../datasets/coco128'):
- # Flatten a recursive directory by bringing all files to top level
- new_path = Path(path + '_flat')
- create_folder(new_path)
- for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)):
- shutil.copyfile(file, new_path / Path(file).name)
-
-
-def extract_boxes(path='../datasets/coco128'): # from utils.datasets import *; extract_boxes()
- # Convert detection dataset into classification dataset, with one directory per class
- path = Path(path) # images dir
-    if (path / 'classifier').is_dir():
-        shutil.rmtree(path / 'classifier')  # remove existing
- files = list(path.rglob('*.*'))
- n = len(files) # number of files
- for im_file in tqdm(files, total=n):
- if im_file.suffix[1:] in img_formats:
- # image
- im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB
- h, w = im.shape[:2]
-
- # labels
- lb_file = Path(img2label_paths([str(im_file)])[0])
- if Path(lb_file).exists():
- with open(lb_file, 'r') as f:
- lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels
-
- for j, x in enumerate(lb):
- c = int(x[0]) # class
- f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename
- if not f.parent.is_dir():
- f.parent.mkdir(parents=True)
-
- b = x[1:] * [w, h, w, h] # box
- # b[2:] = b[2:].max() # rectangle to square
- b[2:] = b[2:] * 1.2 + 3 # pad
-                    b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(int)
-
- b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image
- b[[1, 3]] = np.clip(b[[1, 3]], 0, h)
- assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}'
-
-
-def autosplit(path='../datasets/coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False):
- """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files
- Usage: from utils.datasets import *; autosplit()
- Arguments
- path: Path to images directory
- weights: Train, val, test weights (list, tuple)
- annotated_only: Only use images with an annotated txt file
- """
- path = Path(path) # images dir
- files = sum([list(path.rglob(f"*.{img_ext}")) for img_ext in img_formats], []) # image files only
- n = len(files) # number of files
- random.seed(0) # for reproducibility
- indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split
-
- txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files
- [(path.parent / x).unlink(missing_ok=True) for x in txt] # remove existing
-
- print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only)
- for i, img in tqdm(zip(indices, files), total=n):
- if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label
- with open(path.parent / txt[i], 'a') as f:
- f.write('./' + img.relative_to(path.parent).as_posix() + '\n') # add image to txt file
-
-
-def verify_image_label(args):
- # Verify one image-label pair
- im_file, lb_file, prefix = args
- nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, corrupt
- try:
- # verify images
- im = Image.open(im_file)
- im.verify() # PIL verify
- shape = exif_size(im) # image size
- assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels'
- assert im.format.lower() in img_formats, f'invalid image format {im.format}'
- if im.format.lower() in ('jpg', 'jpeg'):
- with open(im_file, 'rb') as f:
- f.seek(-2, 2)
- assert f.read() == b'\xff\xd9', 'corrupted JPEG'
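-            # 0xFFD9 is the JPEG end-of-image marker; a truncated or corrupted file fails this check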
-
- # verify labels
- segments = [] # instance segments
- if os.path.isfile(lb_file):
- nf = 1 # label found
- with open(lb_file, 'r') as f:
- l = [x.split() for x in f.read().strip().splitlines() if len(x)]
-            if any(len(x) > 8 for x in l):  # is segment
- classes = np.array([x[0] for x in l], dtype=np.float32)
- segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l] # (cls, xy1...)
- l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh)
- l = np.array(l, dtype=np.float32)
- if len(l):
- assert l.shape[1] == 5, 'labels require 5 columns each'
- assert (l >= 0).all(), 'negative labels'
- assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels'
- assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels'
- else:
- ne = 1 # label empty
- l = np.zeros((0, 5), dtype=np.float32)
- else:
- nm = 1 # label missing
- l = np.zeros((0, 5), dtype=np.float32)
- return im_file, l, shape, segments, nm, nf, ne, nc, ''
- except Exception as e:
- nc = 1
- msg = f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}'
- return [None, None, None, None, nm, nf, ne, nc, msg]
-
-
-def dataset_stats(path='coco128.yaml', autodownload=False, verbose=False):
- """ Return dataset statistics dictionary with images and instances counts per split per class
- Usage: from utils.datasets import *; dataset_stats('coco128.yaml', verbose=True)
- Arguments
- path: Path to data.yaml
- autodownload: Attempt to download dataset if not found locally
- verbose: Print stats dictionary
- """
-
- def round_labels(labels):
- # Update labels to integer class and 6 decimal place floats
- return [[int(c), *[round(x, 6) for x in points]] for c, *points in labels]
-
- with open(check_file(path)) as f:
- data = yaml.safe_load(f) # data dict
- check_dataset(data, autodownload) # download dataset if missing
- nc = data['nc'] # number of classes
- stats = {'nc': nc, 'names': data['names']} # statistics dictionary
- for split in 'train', 'val', 'test':
- if data.get(split) is None:
- stats[split] = None # i.e. no test set
- continue
- x = []
- dataset = LoadImagesAndLabels(data[split], augment=False, rect=True) # load dataset
- if split == 'train':
- cache_path = Path(dataset.label_files[0]).parent.with_suffix('.cache') # *.cache path
- for label in tqdm(dataset.labels, total=dataset.n, desc='Statistics'):
- x.append(np.bincount(label[:, 0].astype(int), minlength=nc))
- x = np.array(x) # shape(128x80)
- stats[split] = {'instance_stats': {'total': int(x.sum()), 'per_class': x.sum(0).tolist()},
- 'image_stats': {'total': dataset.n, 'unlabelled': int(np.all(x == 0, 1).sum()),
- 'per_class': (x > 0).sum(0).tolist()},
- 'labels': [{str(Path(k).name): round_labels(v.tolist())} for k, v in
- zip(dataset.img_files, dataset.labels)]}
-
- # Save, print and return
- with open(cache_path.with_suffix('.json'), 'w') as f:
- json.dump(stats, f) # save stats *.json
- if verbose:
- print(json.dumps(stats, indent=2, sort_keys=False))
- # print(yaml.dump([stats], sort_keys=False, default_flow_style=False))
- return stats
diff --git a/cv/detection/yolov5/pytorch/utils/flask_rest_api/README.md b/cv/detection/yolov5/pytorch/utils/flask_rest_api/README.md
deleted file mode 100644
index 324c2416dcd9fa83b18286c33ce309f4f5573637..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/flask_rest_api/README.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Flask REST API
-[REST](https://en.wikipedia.org/wiki/Representational_state_transfer) [API](https://en.wikipedia.org/wiki/API)s are commonly used to expose Machine Learning (ML) models to other services. This folder contains an example REST API created using Flask to expose the YOLOv5s model from [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/).
-
-## Requirements
-
-[Flask](https://palletsprojects.com/p/flask/) is required. Install with:
-```shell
-$ pip install Flask
-```
-
-## Run
-
-After installing Flask, run:
-
-```shell
-$ python3 restapi.py --port 5000
-```
-
-Then use [curl](https://curl.se/) to perform a request:
-
-```shell
-$ curl -X POST -F image=@zidane.jpg 'http://localhost:5000/v1/object-detection/yolov5s'
-```
-
-The model inference results are returned as a JSON response:
-
-```json
-[
- {
- "class": 0,
- "confidence": 0.8900438547,
- "height": 0.9318675399,
- "name": "person",
- "width": 0.3264600933,
- "xcenter": 0.7438579798,
- "ycenter": 0.5207948685
- },
- {
- "class": 0,
- "confidence": 0.8440024257,
- "height": 0.7155083418,
- "name": "person",
- "width": 0.6546785235,
- "xcenter": 0.427829951,
- "ycenter": 0.6334488392
- },
- {
- "class": 27,
- "confidence": 0.3771208823,
- "height": 0.3902671337,
- "name": "tie",
- "width": 0.0696444362,
- "xcenter": 0.3675483763,
- "ycenter": 0.7991207838
- },
- {
- "class": 27,
- "confidence": 0.3527112305,
- "height": 0.1540903747,
- "name": "tie",
- "width": 0.0336618312,
- "xcenter": 0.7814827561,
- "ycenter": 0.5065554976
- }
-]
-```
-
-An example Python script that performs inference using [requests](https://docs.python-requests.org/en/master/) is given in `example_request.py`.
diff --git a/cv/detection/yolov5/pytorch/utils/flask_rest_api/example_request.py b/cv/detection/yolov5/pytorch/utils/flask_rest_api/example_request.py
deleted file mode 100644
index ff21f30f93ca37578ce45366a1ddbe3f3eadaa79..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/flask_rest_api/example_request.py
+++ /dev/null
@@ -1,13 +0,0 @@
-"""Perform test request"""
-import pprint
-
-import requests
-
-DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s"
-TEST_IMAGE = "zidane.jpg"
-
-image_data = open(TEST_IMAGE, "rb").read()
-
-response = requests.post(DETECTION_URL, files={"image": image_data}).json()
-
-pprint.pprint(response)
diff --git a/cv/detection/yolov5/pytorch/utils/flask_rest_api/restapi.py b/cv/detection/yolov5/pytorch/utils/flask_rest_api/restapi.py
deleted file mode 100644
index a54e2309715ce5d3d41e9e2e76a347db3cdb7ccb..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/flask_rest_api/restapi.py
+++ /dev/null
@@ -1,37 +0,0 @@
-"""
-Run a rest API exposing the yolov5s object detection model
-"""
-import argparse
-import io
-
-import torch
-from PIL import Image
-from flask import Flask, request
-
-app = Flask(__name__)
-
-DETECTION_URL = "/v1/object-detection/yolov5s"
-
-
-@app.route(DETECTION_URL, methods=["POST"])
-def predict():
- if not request.method == "POST":
- return
-
- if request.files.get("image"):
- image_file = request.files["image"]
- image_bytes = image_file.read()
-
- img = Image.open(io.BytesIO(image_bytes))
-
- results = model(img, size=640) # reduce size=320 for faster inference
- return results.pandas().xyxy[0].to_json(orient="records")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Flask API exposing YOLOv5 model")
- parser.add_argument("--port", default=5000, type=int, help="port number")
- args = parser.parse_args()
-
- model = torch.hub.load("ultralytics/yolov5", "yolov5s", force_reload=True) # force_reload to recache
- app.run(host="0.0.0.0", port=args.port) # debug=True causes Restarting with stat
diff --git a/cv/detection/yolov5/pytorch/utils/general.py b/cv/detection/yolov5/pytorch/utils/general.py
deleted file mode 100644
index f9b89612a0495676ea4f22d496a03a15087f8af8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/general.py
+++ /dev/null
@@ -1,682 +0,0 @@
-# YOLOv5 general utils
-
-import contextlib
-import glob
-import logging
-import os
-import platform
-import random
-import re
-import signal
-import time
-import urllib
-from itertools import repeat
-from multiprocessing.pool import ThreadPool
-from pathlib import Path
-from subprocess import check_output
-
-import cv2
-import math
-import numpy as np
-import pandas as pd
-import pkg_resources as pkg
-import torch
-import torchvision
-import yaml
-
-from utils.google_utils import gsutil_getsize
-from utils.metrics import box_iou, fitness
-from utils.torch_utils import init_torch_seeds
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-pd.options.display.max_columns = 10
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads
-
-
-class timeout(contextlib.ContextDecorator):
- # Usage: @timeout(seconds) decorator or 'with timeout(seconds):' context manager
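-    # note: relies on signal.SIGALRM, which is POSIX-only; the alarm never fires on Windows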
- def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors=True):
- self.seconds = int(seconds)
- self.timeout_message = timeout_msg
- self.suppress = bool(suppress_timeout_errors)
-
- def _timeout_handler(self, signum, frame):
- raise TimeoutError(self.timeout_message)
-
- def __enter__(self):
- signal.signal(signal.SIGALRM, self._timeout_handler) # Set handler for SIGALRM
- signal.alarm(self.seconds) # start countdown for SIGALRM to be raised
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- signal.alarm(0) # Cancel SIGALRM if it's scheduled
- if self.suppress and exc_type is TimeoutError: # Suppress TimeoutError
- return True
-
-
-def set_logging(rank=-1, verbose=True):
- logging.basicConfig(
- format="%(message)s",
- level=logging.INFO if (verbose and rank in [-1, 0]) else logging.WARN)
-
-
-def init_seeds(seed=0):
- # Initialize random number generator (RNG) seeds
- random.seed(seed)
- np.random.seed(seed)
- init_torch_seeds(seed)
-
-
-def get_latest_run(search_dir='.'):
- # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def is_docker():
- # Is environment a Docker container?
- return Path('/workspace').exists() # or Path('/.dockerenv').exists()
-
-
-def is_colab():
- # Is environment a Google Colab instance?
- try:
- import google.colab
- return True
- except Exception as e:
- return False
-
-
-def is_pip():
- # Is file in a pip package?
- return 'site-packages' in Path(__file__).absolute().parts
-
-
-def emojis(str=''):
- # Return platform-dependent emoji-safe version of string
- return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
-
-
-def file_size(file):
- # Return file size in MB
- return Path(file).stat().st_size / 1e6
-
-
-def check_online():
- # Check internet connectivity
- import socket
- try:
- socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility
- return True
- except OSError:
- return False
-
-
-def check_git_status(err_msg=', for updates see https://github.com/ultralytics/yolov5'):
- # Recommend 'git pull' if code is out of date
-# print(colorstr('github: '), end='')
-# try:
-# assert Path('.git').exists(), 'skipping check (not a git repository)'
-# assert not is_docker(), 'skipping check (Docker image)'
-# assert check_online(), 'skipping check (offline)'
-#
-# cmd = 'git fetch && git config --get remote.origin.url'
-# url = check_output(cmd, shell=True, timeout=5).decode().strip().rstrip('.git') # git fetch
-# branch = check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
-# n = int(check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
-# if n > 0:
-# s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \
-# f"Use 'git pull' to update or 'git clone {url}' to download latest."
-# else:
-# s = f'up to date with {url} ✅'
-# print(emojis(s)) # emoji-safe
-# except Exception as e:
-# print(f'{e}{err_msg}')
- print("no need to check git status")
-
-
-def check_python(minimum='3.6.2'):
- # Check current python version vs. required python version
- check_version(platform.python_version(), minimum, name='Python ')
-
-
-def check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False):
- # Check version vs. required version
- current, minimum = (pkg.parse_version(x) for x in (current, minimum))
- result = (current == minimum) if pinned else (current >= minimum)
- assert result, f'{name}{minimum} required by YOLOv5, but {name}{current} is currently installed'
-
-
-def check_requirements(requirements='requirements.txt', exclude=()):
- # TODO: Remove `return` to auto install packages
- return
- # Check installed dependencies meet requirements (pass *.txt file or list of packages)
- prefix = colorstr('red', 'bold', 'requirements:')
- check_python() # check python version
- if isinstance(requirements, (str, Path)): # requirements.txt file
- file = Path(requirements)
- if not file.exists():
- print(f"{prefix} {file.resolve()} not found, check failed.")
- return
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude]
- else: # list or tuple of packages
- requirements = [x for x in requirements if x not in exclude]
-
- n = 0 # number of packages updates
- for r in requirements:
- try:
- pkg.require(r)
- except Exception as e: # DistributionNotFound or VersionConflict if requirements not met
- print(f"{prefix} {r} not found and is required by YOLOv5, attempting auto-update...")
- try:
- assert check_online(), f"'pip install {r}' skipped (offline)"
- print(check_output(f"pip install '{r}'", shell=True).decode())
- n += 1
- except Exception as e:
- print(f'{prefix} {e}')
-
- if n: # if packages updated
- source = file.resolve() if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- #print(emojis(s)) # emoji-safe
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
- return new_size
-
-
-def check_imshow():
- # Check if environment supports image displays
- try:
- assert not is_docker(), 'cv2.imshow() is disabled in Docker environments'
- assert not is_colab(), 'cv2.imshow() is disabled in Google Colab environments'
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}')
- return False
-
-
-def check_file(file):
- # Search/download file (if necessary) and return path
- file = str(file) # convert to str()
- if Path(file).is_file() or file == '': # exists
- return file
- elif file.startswith(('http:/', 'https:/')): # download
- url = str(Path(file)).replace(':/', '://') # Pathlib turns :// -> :/
- file = Path(urllib.parse.unquote(file)).name.split('?')[0] # '%2F' to '/', split https://url.com/file.txt?auth
- print(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, file)
- assert Path(file).exists() and Path(file).stat().st_size > 0, f'File download failed: {url}' # check
- return file
- else: # search
- files = glob.glob('./**/' + file, recursive=True) # find file
- assert len(files), f'File not found: {file}' # assert file was found
- assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
- return files[0] # return file
-
-
-def check_dataset(data, autodownload=True):
- # Download dataset if not found locally
- path = Path(data.get('path', '')) # optional 'path' field
- if path:
- for k in 'train', 'val', 'test':
- if data.get(k): # prepend path
- data[k] = str(path / data[k]) if isinstance(data[k], str) else [str(path / x) for x in data[k]]
-
- train, val, test, s = [data.get(x) for x in ('train', 'val', 'test', 'download')]
- if val:
- val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
- if not all(x.exists() for x in val):
- print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()])
- if s and autodownload: # download script
- if s.startswith('http') and s.endswith('.zip'): # URL
- f = Path(s).name # filename
- print(f'Downloading {s} ...')
- torch.hub.download_url_to_file(s, f)
- root = path.parent if 'path' in data else '..' # unzip directory i.e. '../'
- Path(root).mkdir(parents=True, exist_ok=True) # create root
- r = os.system(f'unzip -q {f} -d {root} && rm {f}') # unzip
- elif s.startswith('bash '): # bash script
- print(f'Running {s} ...')
- r = os.system(s)
- else: # python script
- r = exec(s, {'yaml': data}) # return None
- print('Dataset autodownload %s\n' % ('success' if r in (0, None) else 'failure')) # print result
- else:
- raise Exception('Dataset not found.')
-
-
-def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1):
- # Multi-threaded file download and unzip function
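-    # e.g. download(['https://host/a.zip', 'https://host/b.zip'], dir='datasets', threads=2)  # hypothetical URLs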
- def download_one(url, dir):
- # Download 1 file
- f = dir / Path(url).name # filename
- if not f.exists():
- print(f'Downloading {url} to {f}...')
- if curl:
- os.system(f"curl -L '{url}' -o '{f}' --retry 9 -C -") # curl download, retry and resume on fail
- else:
- torch.hub.download_url_to_file(url, f, progress=True) # torch download
- if unzip and f.suffix in ('.zip', '.gz'):
- print(f'Unzipping {f}...')
- if f.suffix == '.zip':
- s = f'unzip -qo {f} -d {dir}' # unzip -quiet -overwrite
- elif f.suffix == '.gz':
- s = f'tar xfz {f} --directory {f.parent}' # unzip
- if delete: # delete zip file after unzip
- s += f' && rm {f}'
- os.system(s)
-
- dir = Path(dir)
- dir.mkdir(parents=True, exist_ok=True) # make directory
- if threads > 1:
- pool = ThreadPool(threads)
- pool.imap(lambda x: download_one(*x), zip(url, repeat(dir))) # multi-threaded
- pool.close()
- pool.join()
- else:
- for u in tuple(url) if isinstance(url, str) else url:
- download_one(u, dir)
-
-
-def make_divisible(x, divisor):
- # Returns x evenly divisible by divisor
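-    # e.g. make_divisible(641, 32) -> 672 (rounds up to the next multiple of 32)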
- return math.ceil(x / divisor) * divisor
-
-
-def clean_str(s):
- # Cleans a string by replacing special characters with underscore _
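-    # e.g. clean_str('rtsp://user@host:554/stream') -> 'rtsp_//user_host_554/stream'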
- return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
-def one_cycle(y1=0.0, y2=1.0, steps=100):
- # lambda function for sinusoidal ramp from y1 to y2
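-    # e.g. lf = one_cycle(1, 0.1, 300) is typically passed to torch.optim.lr_scheduler.LambdaLR as lr_lambda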
- return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
-
-
-def colorstr(*input):
- # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
- *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
- colors = {'black': '\033[30m', # basic colors
- 'red': '\033[31m',
- 'green': '\033[32m',
- 'yellow': '\033[33m',
- 'blue': '\033[34m',
- 'magenta': '\033[35m',
- 'cyan': '\033[36m',
- 'white': '\033[37m',
- 'bright_black': '\033[90m', # bright colors
- 'bright_red': '\033[91m',
- 'bright_green': '\033[92m',
- 'bright_yellow': '\033[93m',
- 'bright_blue': '\033[94m',
- 'bright_magenta': '\033[95m',
- 'bright_cyan': '\033[96m',
- 'bright_white': '\033[97m',
- 'end': '\033[0m', # misc
- 'bold': '\033[1m',
- 'underline': '\033[4m'}
- return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
-
-
-def labels_to_class_weights(labels, nc=80):
- # Get class weights (inverse frequency) from training labels
- if labels[0] is None: # no labels loaded
- return torch.Tensor()
-
- labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
-    classes = labels[:, 0].astype(int)  # labels = [class xywh]
- weights = np.bincount(classes, minlength=nc) # occurrences per class
-
- # Prepend gridpoint count (for uCE training)
- # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
- # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
-
- weights[weights == 0] = 1 # replace empty bins with 1
- weights = 1 / weights # number of targets per class
- weights /= weights.sum() # normalize
- return torch.from_numpy(weights)
-
-
-def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
- # Produces image weights based on class_weights and image contents
-    class_counts = np.array([np.bincount(x[:, 0].astype(int), minlength=nc) for x in labels])
- image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1)
- # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample
- return image_weights
-
-
-def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
- # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
- # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
- # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
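-    # the paper's 91 ids include 11 unused categories, so 11 ids are absent from the list below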
- x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
- return x
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
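-    # inverse of xywh2xyxy() below; accepts torch.Tensor or np.ndarray and returns the same type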
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyxy2xywhn(x, w=640, h=640, clip=False):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] normalized where xy1=top-left, xy2=bottom-right
- if clip:
- clip_coords(x, (h, w)) # warning: inplace clip
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = ((x[:, 0] + x[:, 2]) / 2) / w # x center
- y[:, 1] = ((x[:, 1] + x[:, 3]) / 2) / h # y center
- y[:, 2] = (x[:, 2] - x[:, 0]) / w # width
- y[:, 3] = (x[:, 3] - x[:, 1]) / h # height
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
-    x, y = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
-        segments[i] = np.concatenate([np.interp(x, xp, s[:, j]) for j in range(2)]).reshape(2, -1).T  # segment xy
- return segments
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
-def clip_coords(boxes, img_shape):
-    # Clip xyxy bounding boxes to image shape (height, width)
- if isinstance(boxes, torch.Tensor):
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
- else: # np.array
- boxes[:, 0].clip(0, img_shape[1], out=boxes[:, 0]) # x1
- boxes[:, 1].clip(0, img_shape[0], out=boxes[:, 1]) # y1
- boxes[:, 2].clip(0, img_shape[1], out=boxes[:, 2]) # x2
- boxes[:, 3].clip(0, img_shape[0], out=boxes[:, 3]) # y2
-
-
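To make the letterbox bookkeeping above concrete, a small worked example with hand-picked values: a 640x480 source image letterboxed into a 640x640 network input receives 80 px of vertical padding, which `scale_coords` subtracts before rescaling and clipping.

```python
import torch

img1_shape = (640, 640)                         # model input (h, w)
img0_shape = (480, 640)                         # original image (h, w)
det = torch.tensor([[100., 160., 300., 400.]])  # xyxy in model-input pixels
scale_coords(img1_shape, det, img0_shape)       # in place: gain 1.0, pad (0, 80)
print(det)                                      # tensor([[100.,  80., 300., 320.]])
```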
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=(), max_det=300):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Checks
- assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'
- assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
-            lb = labels[xi]
-            v = torch.zeros((len(lb), nc + 5), device=x.device)
-            v[:, :4] = lb[:, 1:5]  # box
-            v[:, 4] = 1.0  # conf
-            v[range(len(lb)), lb[:, 0].long() + 5] = 1.0  # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
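A self-contained sketch of calling `non_max_suppression` on fabricated raw predictions (one image, two classes); the second box heavily overlaps the first and is suppressed:

```python
import torch

# each row: [x, y, w, h, objectness, cls0 score, cls1 score]
pred = torch.tensor([[[100., 100., 50., 50., 0.9, 0.8, 0.1],
                      [102., 101., 48., 52., 0.8, 0.7, 0.2],   # duplicate of the first
                      [300., 300., 40., 40., 0.7, 0.1, 0.9]]])
out = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)
print(out[0])  # (2, 6) tensor of [x1, y1, x2, y2, conf, cls]
```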
-def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
- x = torch.load(f, map_location=torch.device('cpu'))
- if x.get('ema'):
- x['model'] = x['ema'] # replace model with ema
- for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys
- x[k] = None
- x['epoch'] = -1
- x['model'].half() # to FP16
- for p in x['model'].parameters():
- p.requires_grad = False
- torch.save(x, s or f)
- mb = os.path.getsize(s or f) / 1E6 # filesize
- print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB")
-
-
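Typical usage at the end of a run (paths are hypothetical): strip the checkpoint in place, or write a deployment copy; the FP16 cast roughly halves the file size.

```python
strip_optimizer('runs/train/exp/weights/best.pt')                    # overwrite in place
strip_optimizer('runs/train/exp/weights/best.pt', 'deploy/best.pt')  # or save a stripped copy
```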
-def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''):
- # Print mutation results to evolve.txt (for use with train.py --evolve)
- a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys
- b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c))
-
- if bucket:
- url = 'gs://%s/evolve.txt' % bucket
- if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0):
- os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local
-
- with open('evolve.txt', 'a') as f: # append result
- f.write(c + b + '\n')
- x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows
- x = x[np.argsort(-fitness(x))] # sort
- np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness
-
- # Save yaml
- for i, k in enumerate(hyp.keys()):
- hyp[k] = float(x[0, i + 7])
- with open(yaml_file, 'w') as f:
- results = tuple(x[0, :7])
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n')
- yaml.safe_dump(hyp, f, sort_keys=False)
-
- if bucket:
- os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload
-
-
-def apply_classifier(x, model, img, im0):
- # Apply a second stage classifier to yolo outputs
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
- for i, d in enumerate(x): # per image
- if d is not None and len(d):
- d = d.clone()
-
- # Reshape and pad cutouts
- b = xyxy2xywh(d[:, :4]) # boxes
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
- d[:, :4] = xywh2xyxy(b).long()
-
- # Rescale boxes from img_size to im0 size
- scale_coords(img.shape[2:], d[:, :4], im0[i].shape)
-
- # Classes
- pred_cls1 = d[:, 5].long()
- ims = []
- for j, a in enumerate(d): # per item
- cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
- im = cv2.resize(cutout, (224, 224)) # BGR
- # cv2.imwrite('test%i.jpg' % j, cutout)
-
-                im = im[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x224x224
- im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
- im /= 255.0 # 0 - 255 to 0.0 - 1.0
- ims.append(im)
-
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
- x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
-
- return x
-
-
-def save_one_box(xyxy, im, file='image.jpg', gain=1.02, pad=10, square=False, BGR=False, save=True):
- # Save image crop as {file} with crop size multiple {gain} and {pad} pixels. Save and/or return crop
- xyxy = torch.tensor(xyxy).view(-1, 4)
- b = xyxy2xywh(xyxy) # boxes
- if square:
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square
- b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad
- xyxy = xywh2xyxy(b).long()
- clip_coords(xyxy, im.shape)
- crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2]), ::(1 if BGR else -1)]
- if save:
- cv2.imwrite(str(increment_path(file, mkdir=True).with_suffix('.jpg')), crop)
- return crop
-
-
-def increment_path(path, exist_ok=False, sep='', mkdir=False):
- # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc.
- path = Path(path) # os-agnostic
- if path.exists() and not exist_ok:
- suffix = path.suffix
- path = path.with_suffix('')
- dirs = glob.glob(f"{path}{sep}*") # similar paths
- matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
- i = [int(m.groups()[0]) for m in matches if m] # indices
- n = max(i) + 1 if i else 2 # increment number
- path = Path(f"{path}{sep}{n}{suffix}") # update path
- dir = path if path.suffix == '' else path.parent # directory
- if not dir.exists() and mkdir:
- dir.mkdir(parents=True, exist_ok=True) # make directory
- return path
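For example (directory names hypothetical), repeated runs get non-clobbering save directories:

```python
from pathlib import Path

save_dir = increment_path(Path('runs') / 'exp', mkdir=True)
print(save_dir)  # runs/exp on the first call, runs/exp2 on the next, then runs/exp3, ...
```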
diff --git a/cv/detection/yolov5/pytorch/utils/google_app_engine/Dockerfile b/cv/detection/yolov5/pytorch/utils/google_app_engine/Dockerfile
deleted file mode 100644
index 0155618f475104e9858b81470339558156c94e13..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/google_app_engine/Dockerfile
+++ /dev/null
@@ -1,25 +0,0 @@
-FROM gcr.io/google-appengine/python
-
-# Create a virtualenv for dependencies. This isolates these packages from
-# system-level packages.
-# Use -p python3 or -p python3.7 to select python version. Default is version 2.
-RUN virtualenv /env -p python3
-
-# Setting these environment variables is the same as running
-# source /env/bin/activate.
-ENV VIRTUAL_ENV /env
-ENV PATH /env/bin:$PATH
-
-RUN apt-get update && apt-get install -y python-opencv
-
-# Copy the application's requirements.txt and run pip to install all
-# dependencies into the virtualenv.
-ADD requirements.txt /app/requirements.txt
-RUN pip install -r /app/requirements.txt
-
-# Add the application source code.
-ADD . /app
-
-# Run a WSGI server to serve the application. gunicorn must be declared as
-# a dependency in requirements.txt.
-CMD gunicorn -b :$PORT main:app
diff --git a/cv/detection/yolov5/pytorch/utils/google_app_engine/additional_requirements.txt b/cv/detection/yolov5/pytorch/utils/google_app_engine/additional_requirements.txt
deleted file mode 100644
index 2f81c8b40056cb622b87bd5551581e264a78992d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/google_app_engine/additional_requirements.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-# add these requirements in your app on top of the existing ones
-pip==19.2
-Flask==1.0.2
-gunicorn==19.9.0
diff --git a/cv/detection/yolov5/pytorch/utils/google_app_engine/app.yaml b/cv/detection/yolov5/pytorch/utils/google_app_engine/app.yaml
deleted file mode 100644
index ac29d104b144abd634482b35282725d694e84a2b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/google_app_engine/app.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-runtime: custom
-env: flex
-
-service: yolov5app
-
-liveness_check:
- initial_delay_sec: 600
-
-manual_scaling:
- instances: 1
-resources:
- cpu: 1
- memory_gb: 4
- disk_size_gb: 20
\ No newline at end of file
diff --git a/cv/detection/yolov5/pytorch/utils/google_utils.py b/cv/detection/yolov5/pytorch/utils/google_utils.py
deleted file mode 100644
index aa5c455146d619c359500acad3eebedf8f9384c2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/google_utils.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Google utils: https://cloud.google.com/storage/docs/reference/libraries
-
-import os
-import platform
-import subprocess
-import time
-import urllib
-from pathlib import Path
-
-import requests
-import torch
-
-
-def gsutil_getsize(url=''):
- # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
- s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
-    return int(s.split(' ')[0]) if len(s) else 0  # bytes
-
-
-def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''):
- # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes
- file = Path(file)
- assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}"
- try: # url1
- print(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, str(file))
- assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check
- except Exception as e: # url2
- file.unlink(missing_ok=True) # remove partial downloads
- print(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...')
- os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail
- finally:
- if not file.exists() or file.stat().st_size < min_bytes: # check
- file.unlink(missing_ok=True) # remove partial downloads
- print(f"ERROR: {assert_msg}\n{error_msg}")
- print('')
-
-
-def attempt_download(file, repo='ultralytics/yolov5'): # from utils.google_utils import *; attempt_download()
- # Attempt file download if does not exist
- file = Path(str(file).strip().replace("'", ''))
-
- if not file.exists():
- # URL specified
- name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc.
- if str(file).startswith(('http:/', 'https:/')): # download
- url = str(file).replace(':/', '://') # Pathlib turns :// -> :/
- name = name.split('?')[0] # parse authentication https://url.com/file.txt?auth...
- safe_download(file=name, url=url, min_bytes=1E5)
- return name
-
- # GitHub assets
- file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required)
- try:
- response = requests.get(f'https://api.github.com/repos/{repo}/releases/tags/v5.0').json() # github api
- assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...]
-        tag = response['tag_name']  # i.e. 'v5.0'
-    except Exception:  # fallback plan
- assets = ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt',
- 'yolov5s6.pt', 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt']
- try:
- tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1]
-        except Exception:
- tag = 'v5.0' # current release
-
- if name in assets:
- safe_download(file,
- url=f'https://github.com/{repo}/releases/download/{tag}/{name}',
- # url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}', # backup url (optional)
- min_bytes=1E5,
- error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/')
-
- return str(file)
-
-
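Usage sketch: pass either a bare release-asset name or a full URL; the file is only fetched when it is missing on disk.

```python
weights = attempt_download('yolov5s.pt')  # resolved against the ultralytics/yolov5 release assets
# weights = attempt_download('https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt')
```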
-def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'):
- # Downloads a file from Google Drive. from yolov5.utils.google_utils import *; gdrive_download()
- t = time.time()
- file = Path(file)
- cookie = Path('cookie') # gdrive cookie
- print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
- file.unlink(missing_ok=True) # remove existing file
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Attempt file download
- out = "NUL" if platform.system() == "Windows" else "/dev/null"
- os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
- if os.path.exists('cookie'): # large file
- s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
- else: # small file
- s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
- r = os.system(s) # execute, capture return
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Error check
- if r != 0:
- file.unlink(missing_ok=True) # remove partial
- print('Download error ') # raise Exception('Download error')
- return r
-
- # Unzip if archive
- if file.suffix == '.zip':
- print('unzipping... ', end='')
- os.system(f'unzip -q {file}') # unzip
- file.unlink() # remove zip to free space
-
- print(f'Done ({time.time() - t:.1f}s)')
- return r
-
-
-def get_token(cookie="./cookie"):
- with open(cookie) as f:
- for line in f:
- if "download" in line:
- return line.split()[-1]
- return ""
-
-# def upload_blob(bucket_name, source_file_name, destination_blob_name):
-# # Uploads a file to a bucket
-# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
-#
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(destination_blob_name)
-#
-# blob.upload_from_filename(source_file_name)
-#
-# print('File {} uploaded to {}.'.format(
-# source_file_name,
-# destination_blob_name))
-#
-#
-# def download_blob(bucket_name, source_blob_name, destination_file_name):
-#     # Downloads a blob from a bucket
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(source_blob_name)
-#
-# blob.download_to_filename(destination_file_name)
-#
-# print('Blob {} downloaded to {}.'.format(
-# source_blob_name,
-# destination_file_name))
diff --git a/cv/detection/yolov5/pytorch/utils/loss.py b/cv/detection/yolov5/pytorch/utils/loss.py
deleted file mode 100644
index 370f7323e54e8cd01e8fd7cef9708fc192bce3c2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/loss.py
+++ /dev/null
@@ -1,226 +0,0 @@
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-
-# Loss functions
-
-import torch
-import torch.nn as nn
-
-from utils.metrics import bbox_iou
-from utils.torch_utils import is_parallel
-
-
-def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441
- # return positive, negative label smoothing BCE targets
- return 1.0 - 0.5 * eps, 0.5 * eps
-
-
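Concretely, `eps=0.1` softens the one-hot classification targets so positives train toward 0.95 and negatives toward 0.05:

```python
cp, cn = smooth_BCE(eps=0.1)
print(cp, cn)  # 0.95 0.05
```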
-class BCEBlurWithLogitsLoss(nn.Module):
-    # BCEWithLogitsLoss() with reduced missing label effects.
- def __init__(self, alpha=0.05):
- super(BCEBlurWithLogitsLoss, self).__init__()
- self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss()
- self.alpha = alpha
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- pred = torch.sigmoid(pred) # prob from logits
- dx = pred - true # reduce only missing label effects
- # dx = (pred - true).abs() # reduce missing label and false label effects
- alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
- loss *= alpha_factor
- return loss.mean()
-
-
-class FocalLoss(nn.Module):
- # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(FocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- # p_t = torch.exp(-loss)
- # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
-
- # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
- pred_prob = torch.sigmoid(pred) # prob from logits
- p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = (1.0 - p_t) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
-
-
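A minimal sketch of wrapping the standard BCE criterion, as the class comment suggests (toy logits and labels):

```python
import torch
import torch.nn as nn

criterion = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5, alpha=0.25)
logits = torch.tensor([2.0, -1.0])  # raw scores
targets = torch.tensor([1.0, 0.0])  # binary labels
print(criterion(logits, targets))   # easy examples contribute less than under plain BCE
```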
-class QFocalLoss(nn.Module):
-    # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = QFocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(QFocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
-
- pred_prob = torch.sigmoid(pred) # prob from logits
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = torch.abs(true - pred_prob) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
-
-
-class ComputeLoss:
- # Compute losses
- def __init__(self, model, autobalance=False):
- super(ComputeLoss, self).__init__()
- self.sort_obj_iou = False
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors':
- setattr(self, k, getattr(det, k))
-
- def __call__(self, p, targets): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- pxy = ps[:, :2].sigmoid() * 2. - 0.5
- pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- score_iou = iou.detach().clamp(0).type(tobj.dtype)
- if self.sort_obj_iou:
- sort_id = torch.argsort(score_iou)
- b, a, gj, gi, score_iou = b[sort_id], a[sort_id], gj[sort_id], gi[sort_id], score_iou[sort_id]
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * score_iou # iou ratio
-
- # Classification
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
- t[range(n), tcls[i]] = self.cp
- lcls += self.BCEcls(ps[:, 5:], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., 4], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
- def build_targets(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- tcls, tbox, indices, anch = [], [], [], []
- gain = torch.ones(7, device=targets.device) # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- # anchors = self.anchors[i]
- anchors, shape = self.anchors[i], p[i].shape
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- # indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- indices.append((b,a,gj.clamp_(0,shape[2] - 1),gi.clamp_(0,shape[3] - 1))) # image, anchor, grid indices
- tbox.append(torch.cat((gxy - gij, gwh), 1)) # box
- anch.append(anchors[a]) # anchors
- tcls.append(c) # class
-
- return tcls, tbox, indices, anch
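To make the regression branch in `__call__` concrete, here is the box decoding applied to a single raw prediction in isolation (toy values; the anchor is hypothetical):

```python
import torch

ps = torch.tensor([[0.0, 0.0, 0.0, 0.0]])       # raw tx, ty, tw, th
anchor = torch.tensor([1.25, 1.625])            # anchor w, h in grid units (hypothetical)
pxy = ps[:, :2].sigmoid() * 2. - 0.5            # xy offset constrained to (-0.5, 1.5)
pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchor  # wh constrained to (0, 4) x anchor
print(torch.cat((pxy, pwh), 1))                 # tensor([[0.5000, 0.5000, 1.2500, 1.6250]])
```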
diff --git a/cv/detection/yolov5/pytorch/utils/metrics.py b/cv/detection/yolov5/pytorch/utils/metrics.py
deleted file mode 100644
index c94c4a76a96457084cb1ff346af181055bd2aa42..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/metrics.py
+++ /dev/null
@@ -1,327 +0,0 @@
-# Model validation metrics
-
-import warnings
-from pathlib import Path
-
-import math
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
- return (x[:, :4] * w).sum(1)
-
-
-def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes = np.unique(target_cls)
-    nc = unique_classes.shape[0]  # number of classes
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = (target_cls == c).sum() # number of labels
- n_p = i.sum() # number of predictions
-
- if n_p == 0 or n_l == 0:
- continue
- else:
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + 1e-16) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + 1e-16)
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
-
- i = f1.mean(0).argmax() # max F1 index
- return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
-
-
-def compute_ap(recall, precision):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01]))
- mpre = np.concatenate(([1.], precision, [0.]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
-
-
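A toy invocation (hand-made PR curve, not real detector output) showing the 101-point interpolated integration:

```python
import numpy as np

recall = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
precision = np.array([1.0, 1.0, 0.8, 0.7, 0.6])
ap, mpre, mrec = compute_ap(recall, precision)
print(f'AP = {ap:.3f}')  # area under the interpolated precision envelope
```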
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = box_iou(labels[:, 1:], detections[:, :4])
-
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(np.int16)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[detection_classes[m1[j]], gc] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # background FP
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # background FN
-
-    def get_matrix(self):  # accessor; named to avoid clashing with the self.matrix array set in __init__
-        return self.matrix
-
- def plot(self, normalize=True, save_dir='', names=()):
- try:
- import seaborn as sn
-
- array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-6) if normalize else 1) # normalize columns
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig = plt.figure(figsize=(12, 9), tight_layout=True)
- sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
- labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered
- sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
-                       xticklabels=list(names) + ['background FP'] if labels else "auto",
-                       yticklabels=list(names) + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
- fig.axes[0].set_xlabel('True')
- fig.axes[0].set_ylabel('Predicted')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
- except Exception as e:
- print(f'WARNING: ConfusionMatrix plot failure: {e}')
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
-
-
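A quick numeric check on two hand-picked boxes: plain IoU is intersection over union (25/175 here), while CIoU additionally penalizes center distance and aspect-ratio mismatch.

```python
import torch

b1 = torch.tensor([0., 0., 10., 10.])    # a single box, xyxy
b2 = torch.tensor([[5., 5., 15., 15.]])  # nx4 comparison boxes
print(bbox_iou(b1, b2))                  # ~0.1429
print(bbox_iou(b1, b2, CIoU=True))       # strictly smaller than plain IoU here
```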
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def bbox_ioa(box1, box2, eps=1E-7):
- """ Returns the intersection over box2 area given box1, box2. Boxes are x1y1x2y2
- box1: np.array of shape(4)
- box2: np.array of shape(nx4)
- returns: np.array of shape(n)
- """
-
- box2 = box2.transpose()
-
- # Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
-
- # Intersection area
- inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
- (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
-
- # box2 area
- box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + eps
-
- # Intersection over box2 area
- return inter_area / box2_area
-
-
-def wh_iou(wh1, wh2):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
-
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
-
-
-def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = py.mean(0)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
diff --git a/cv/detection/yolov5/pytorch/utils/plots.py b/cv/detection/yolov5/pytorch/utils/plots.py
deleted file mode 100644
index 23a48620e6b5732894194ae5852ce0a3d727151e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/plots.py
+++ /dev/null
@@ -1,472 +0,0 @@
-# Plotting utils
-
-import glob
-import os
-from copy import copy
-from pathlib import Path
-
-import cv2
-import math
-import matplotlib
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-import seaborn as sn
-import torch
-import yaml
-from PIL import Image, ImageDraw, ImageFont
-
-from utils.general import increment_path, xywh2xyxy, xyxy2xywh
-from utils.metrics import fitness
-
-# Settings
-matplotlib.rc('font', **{'size': 11})
-matplotlib.use('Agg') # for writing to files only
-
-
-class Colors:
- # Ultralytics color palette https://ultralytics.com/
- def __init__(self):
- # hex = matplotlib.colors.TABLEAU_COLORS.values()
- hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB',
- '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7')
- self.palette = [self.hex2rgb('#' + c) for c in hex]
- self.n = len(self.palette)
-
- def __call__(self, i, bgr=False):
- c = self.palette[int(i) % self.n]
- return (c[2], c[1], c[0]) if bgr else c
-
- @staticmethod
- def hex2rgb(h): # rgb order (PIL)
- return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))
-
-
-colors = Colors() # create instance for 'from utils.plots import colors'
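The palette is deterministic per class index and wraps modulo its 20 entries; OpenCV callers pass `bgr=True`:

```python
print(colors(0))            # (255, 56, 56)  RGB for class 0 ('FF3838')
print(colors(0, bgr=True))  # (56, 56, 255)  same color in BGR order
print(colors(20))           # wraps back to the class-0 color
```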
-
-
-def hist2d(x, y, n=100):
- # 2d histogram used in labels.png and evolve.png
- xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n)
- hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges))
- xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1)
- yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1)
- return np.log(hist[xidx, yidx])
-
-
-def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
- from scipy.signal import butter, filtfilt
-
- # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy
- def butter_lowpass(cutoff, fs, order):
- nyq = 0.5 * fs
- normal_cutoff = cutoff / nyq
- return butter(order, normal_cutoff, btype='low', analog=False)
-
- b, a = butter_lowpass(cutoff, fs, order=order)
- return filtfilt(b, a, data) # forward-backward filter
-
-
-def plot_one_box(x, im, color=(128, 128, 128), label=None, line_thickness=3):
- # Plots one bounding box on image 'im' using OpenCV
- assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to plot_on_box() input image.'
- tl = line_thickness or round(0.002 * (im.shape[0] + im.shape[1]) / 2) + 1 # line/font thickness
- c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
- cv2.rectangle(im, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
- if label:
- tf = max(tl - 1, 1) # font thickness
- t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
- c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
- cv2.rectangle(im, c1, c2, color, -1, cv2.LINE_AA) # filled
- cv2.putText(im, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)
-
-
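A minimal sketch of drawing one labeled box on a blank canvas (the label text is made up; `cv2` and `colors` come from this module's imports above):

```python
import numpy as np

canvas = np.full((480, 640, 3), 255, dtype=np.uint8)  # white, contiguous image
plot_one_box([100, 100, 300, 250], canvas, color=colors(0), label='person 0.92')
```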
-def plot_one_box_PIL(box, im, color=(128, 128, 128), label=None, line_thickness=None):
- # Plots one bounding box on image 'im' using PIL
- im = Image.fromarray(im)
- draw = ImageDraw.Draw(im)
- line_thickness = line_thickness or max(int(min(im.size) / 200), 2)
- draw.rectangle(box, width=line_thickness, outline=color) # plot
- if label:
- font = ImageFont.truetype("Arial.ttf", size=max(round(max(im.size) / 40), 12))
- txt_width, txt_height = font.getsize(label)
- draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=color)
- draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font)
- return np.asarray(im)
-
-
-def plot_wh_methods(): # from utils.plots import *; plot_wh_methods()
- # Compares the two methods for width-height anchor multiplication
- # https://github.com/ultralytics/yolov3/issues/168
- x = np.arange(-4.0, 4.0, .1)
- ya = np.exp(x)
- yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2
-
- fig = plt.figure(figsize=(6, 3), tight_layout=True)
- plt.plot(x, ya, '.-', label='YOLOv3')
- plt.plot(x, yb ** 2, '.-', label='YOLOv5 ^2')
- plt.plot(x, yb ** 1.6, '.-', label='YOLOv5 ^1.6')
- plt.xlim(left=-4, right=4)
- plt.ylim(bottom=0, top=6)
- plt.xlabel('input')
- plt.ylabel('output')
- plt.grid()
- plt.legend()
- fig.savefig('comparison.png', dpi=200)
-
-
-def output_to_target(output):
- # Convert model output to target format [batch_id, class_id, x, y, w, h, conf]
- targets = []
- for i, o in enumerate(output):
- for *box, conf, cls in o.cpu().numpy():
- targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf])
- return np.array(targets)
-
-
-def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16):
- # Plot image grid with labels
-
- if isinstance(images, torch.Tensor):
- images = images.cpu().float().numpy()
- if isinstance(targets, torch.Tensor):
- targets = targets.cpu().numpy()
-
- # un-normalise
- if np.max(images[0]) <= 1:
- images *= 255
-
- tl = 3 # line thickness
- tf = max(tl - 1, 1) # font thickness
- bs, _, h, w = images.shape # batch size, _, height, width
- bs = min(bs, max_subplots) # limit plot images
- ns = np.ceil(bs ** 0.5) # number of subplots (square)
-
- # Check if we should resize
- scale_factor = max_size / max(h, w)
- if scale_factor < 1:
- h = math.ceil(scale_factor * h)
- w = math.ceil(scale_factor * w)
-
- mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
- for i, img in enumerate(images):
- if i == max_subplots: # if last batch has fewer images than we expect
- break
-
- block_x = int(w * (i // ns))
- block_y = int(h * (i % ns))
-
- img = img.transpose(1, 2, 0)
- if scale_factor < 1:
- img = cv2.resize(img, (w, h))
-
- mosaic[block_y:block_y + h, block_x:block_x + w, :] = img
- if len(targets) > 0:
- image_targets = targets[targets[:, 0] == i]
- boxes = xywh2xyxy(image_targets[:, 2:6]).T
- classes = image_targets[:, 1].astype('int')
- labels = image_targets.shape[1] == 6 # labels if no conf column
- conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred)
-
- if boxes.shape[1]:
- if boxes.max() <= 1.01: # if normalized with tolerance 0.01
- boxes[[0, 2]] *= w # scale to pixels
- boxes[[1, 3]] *= h
- elif scale_factor < 1: # absolute coords need scale if image scales
- boxes *= scale_factor
- boxes[[0, 2]] += block_x
- boxes[[1, 3]] += block_y
- for j, box in enumerate(boxes.T):
- cls = int(classes[j])
- color = colors(cls)
- cls = names[cls] if names else cls
- if labels or conf[j] > 0.25: # 0.25 conf thresh
- label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j])
- plot_one_box(box, mosaic, label=label, color=color, line_thickness=tl)
-
- # Draw image filename labels
- if paths:
- label = Path(paths[i]).name[:40] # trim to 40 char
- t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
- cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf,
- lineType=cv2.LINE_AA)
-
- # Image border
- cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3)
-
- if fname:
- r = min(1280. / max(h, w) / ns, 1.0) # ratio to limit image size
- mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA)
- # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save
- Image.fromarray(mosaic).save(fname) # PIL save
- return mosaic
-
-
-def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
- # Plot LR simulating training for full epochs
- optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals
- y = []
- for _ in range(epochs):
- scheduler.step()
- y.append(optimizer.param_groups[0]['lr'])
- plt.plot(y, '.-', label='LR')
- plt.xlabel('epoch')
- plt.ylabel('LR')
- plt.grid()
- plt.xlim(0, epochs)
- plt.ylim(0)
- plt.savefig(Path(save_dir) / 'LR.png', dpi=200)
- plt.close()
-
-
-def plot_test_txt(): # from utils.plots import *; plot_test()
- # Plot test.txt histograms
- x = np.loadtxt('test.txt', dtype=np.float32)
- box = xyxy2xywh(x[:, :4])
- cx, cy = box[:, 0], box[:, 1]
-
- fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True)
- ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0)
- ax.set_aspect('equal')
- plt.savefig('hist2d.png', dpi=300)
-
- fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True)
- ax[0].hist(cx, bins=600)
- ax[1].hist(cy, bins=600)
- plt.savefig('hist1d.png', dpi=200)
-
-
-def plot_targets_txt(): # from utils.plots import *; plot_targets_txt()
- # Plot targets.txt histograms
- x = np.loadtxt('targets.txt', dtype=np.float32).T
- s = ['x targets', 'y targets', 'width targets', 'height targets']
- fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)
- ax = ax.ravel()
- for i in range(4):
- ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std()))
- ax[i].legend()
- ax[i].set_title(s[i])
- plt.savefig('targets.jpg', dpi=200)
-
-
-def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt()
- # Plot study.txt generated by test.py
- plot2 = False # plot additional results
- if plot2:
- ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel()
-
- fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)
- # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]:
- for f in sorted(Path(path).glob('study*.txt')):
- y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T
- x = np.arange(y.shape[1]) if x is None else np.array(x)
- if plot2:
- s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_preprocess (ms/img)', 't_inference (ms/img)', 't_NMS (ms/img)']
- for i in range(7):
- ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8)
- ax[i].set_title(s[i])
-
- j = y[3].argmax() + 1
- ax2.plot(y[5, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8,
- label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO'))
-
- ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],
- 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet')
-
- ax2.grid(alpha=0.2)
- ax2.set_yticks(np.arange(20, 60, 5))
- ax2.set_xlim(0, 57)
- ax2.set_ylim(30, 55)
- ax2.set_xlabel('GPU Speed (ms/img)')
- ax2.set_ylabel('COCO AP val')
- ax2.legend(loc='lower right')
- plt.savefig(str(Path(path).name) + '.png', dpi=300)
-
-
-def plot_labels(labels, names=(), save_dir=Path(''), loggers=None):
- # plot dataset labels
- print('Plotting labels... ')
- c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes
- nc = int(c.max() + 1) # number of classes
- x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height'])
-
- # seaborn correlogram
- sn.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9))
- plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200)
- plt.close()
-
- # matplotlib labels
- matplotlib.use('svg') # faster
- ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel()
- y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8)
- # [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # update colors bug #3195
- ax[0].set_ylabel('instances')
- if 0 < len(names) < 30:
- ax[0].set_xticks(range(len(names)))
- ax[0].set_xticklabels(names, rotation=90, fontsize=10)
- else:
- ax[0].set_xlabel('classes')
- sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9)
- sn.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9)
-
- # rectangles
- labels[:, 1:3] = 0.5 # center
- labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000
- img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255)
- for cls, *box in labels[:1000]:
- ImageDraw.Draw(img).rectangle(box, width=1, outline=colors(cls)) # plot
- ax[1].imshow(img)
- ax[1].axis('off')
-
- for a in [0, 1, 2, 3]:
- for s in ['top', 'right', 'left', 'bottom']:
- ax[a].spines[s].set_visible(False)
-
- plt.savefig(save_dir / 'labels.jpg', dpi=200)
- matplotlib.use('Agg')
- plt.close()
-
- # loggers
-    for k, v in (loggers or {}).items():
- if k == 'wandb' and v:
- v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False)
-
-
-def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution()
- # Plot hyperparameter evolution results in evolve.txt
- with open(yaml_file) as f:
- hyp = yaml.safe_load(f)
- x = np.loadtxt('evolve.txt', ndmin=2)
- f = fitness(x)
- # weights = (f - f.min()) ** 2 # for weighted results
- plt.figure(figsize=(10, 12), tight_layout=True)
- matplotlib.rc('font', **{'size': 8})
- for i, (k, v) in enumerate(hyp.items()):
- y = x[:, i + 7]
- # mu = (y * weights).sum() / weights.sum() # best weighted result
- mu = y[f.argmax()] # best single result
- plt.subplot(6, 5, i + 1)
- plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none')
- plt.plot(mu, f.max(), 'k+', markersize=15)
- plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters
- if i % 5 != 0:
- plt.yticks([])
- print('%15s: %.3g' % (k, mu))
- plt.savefig('evolve.png', dpi=200)
- print('\nPlot saved as evolve.png')
-
-
-def profile_idetection(start=0, stop=0, labels=(), save_dir=''):
- # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection()
- ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel()
- s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS']
- files = list(Path(save_dir).glob('frames*.txt'))
- for fi, f in enumerate(files):
- try:
- results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows
- n = results.shape[1] # number of rows
- x = np.arange(start, min(stop, n) if stop else n)
- results = results[:, x]
- t = (results[0] - results[0].min()) # set t0=0s
- results[0] = x
- for i, a in enumerate(ax):
- if i < len(results):
- label = labels[fi] if len(labels) else f.stem.replace('frames_', '')
- a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5)
- a.set_title(s[i])
- a.set_xlabel('time (s)')
- # if fi == len(files) - 1:
- # a.set_ylim(bottom=0)
- for side in ['top', 'right']:
- a.spines[side].set_visible(False)
- else:
- a.remove()
- except Exception as e:
- print('Warning: Plotting error for %s; %s' % (f, e))
-
- ax[1].legend()
- plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200)
-
-
-def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay()
- # Plot training 'results*.txt', overlaying train and val losses
- s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends
- t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles
- for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')):
- results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T
- n = results.shape[1] # number of rows
- x = range(start, min(stop, n) if stop else n)
- fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True)
- ax = ax.ravel()
- for i in range(5):
- for j in [i, i + 5]:
- y = results[j, x]
- ax[i].plot(x, y, marker='.', label=s[j])
- # y_smooth = butter_lowpass_filtfilt(y)
- # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j])
-
- ax[i].set_title(t[i])
- ax[i].legend()
-            if i == 0:
-                ax[i].set_ylabel(f)  # add filename
- fig.savefig(f.replace('.txt', '.png'), dpi=200)
-
-
-def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''):
- # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp')
- fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True)
- ax = ax.ravel()
- s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall',
- 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95']
- if bucket:
- # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id]
- files = ['results%g.txt' % x for x in id]
- c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id)
- os.system(c)
- else:
- files = list(Path(save_dir).glob('results*.txt'))
- assert len(files), 'No results.txt files found in %s, nothing to plot.' % os.path.abspath(save_dir)
- for fi, f in enumerate(files):
- try:
- results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T
- n = results.shape[1] # number of rows
- x = range(start, min(stop, n) if stop else n)
- for i in range(10):
- y = results[i, x]
- if i in [0, 1, 2, 5, 6, 7]:
- y[y == 0] = np.nan # don't show zero loss values
- # y /= y[0] # normalize
- label = labels[fi] if len(labels) else f.stem
- ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8)
- ax[i].set_title(s[i])
- # if i in [5, 6, 7]: # share train and val loss y axes
- # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
- except Exception as e:
- print('Warning: Plotting error for %s; %s' % (f, e))
-
- ax[1].legend()
- fig.savefig(Path(save_dir) / 'results.png', dpi=200)
-
-
-def feature_visualization(x, module_type, stage, n=64, save_dir=Path('runs/detect/exp')):
- """
- x: Features to be visualized
- module_type: Module type
- stage: Module stage within model
- n: Maximum number of feature maps to plot
- save_dir: Directory to save results
- """
- if 'Detect' not in module_type:
- batch, channels, height, width = x.shape # batch, channels, height, width
- if height > 1 and width > 1:
- f = f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename
-
- plt.figure(tight_layout=True)
- blocks = torch.chunk(x[0].cpu(), channels, dim=0) # select batch index 0, block by channels
- n = min(n, channels) # number of plots
- ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True)[1].ravel() # n/8 rows x 8 cols
- for i in range(n):
- ax[i].imshow(blocks[i].squeeze()) # cmap='gray'
- ax[i].axis('off')
-
- print(f'Saving {save_dir / f}... ({n}/{channels})')
- plt.savefig(save_dir / f, dpi=300)
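-
-# Usage sketch, assuming this is called from inside a model's forward pass;
-# the module_type and stage values below are illustrative:
-# feature_visualization(x, module_type='models.common.C3', stage=2, n=32)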
diff --git a/cv/detection/yolov5/pytorch/utils/torch_utils.py b/cv/detection/yolov5/pytorch/utils/torch_utils.py
deleted file mode 100644
index 36b6845a8c480299433e42a7da0ab138fc18fb18..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/torch_utils.py
+++ /dev/null
@@ -1,312 +0,0 @@
-# YOLOv5 PyTorch utils
-
-import datetime
-import logging
-import os
-import platform
-import subprocess
-import time
-from contextlib import contextmanager
-from copy import deepcopy
-from pathlib import Path
-
-import math
-import torch
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-
-try:
- import thop # for FLOPs computation
-except ImportError:
- thop = None
-logger = logging.getLogger(__name__)
-
-
-@contextmanager
-def torch_distributed_zero_first(local_rank: int):
- """
- Context manager to make all processes in distributed training wait for the local master to do something first.
- """
- if local_rank not in [-1, 0]:
- dist.barrier()
- yield
- if local_rank == 0:
- dist.barrier()
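-
-# Usage sketch, assuming every rank of a DDP job enters the block; rank 0 does
-# the work (e.g. caching a dataset) while the other ranks wait at the barrier:
-# with torch_distributed_zero_first(local_rank):
-#     dataset = build_dataset() # build_dataset is illustrative, not a fixed API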
-
-
-def init_torch_seeds(seed=0):
- # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
- torch.manual_seed(seed)
- if seed == 0: # slower, more reproducible
- cudnn.benchmark, cudnn.deterministic = False, True
- else: # faster, less reproducible
- cudnn.benchmark, cudnn.deterministic = True, False
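-
-# Usage sketch of the tradeoff described above:
-# init_torch_seeds(0) # reproducible: cudnn.deterministic=True, cudnn.benchmark=False
-# init_torch_seeds(2) # faster: cudnn.deterministic=False, cudnn.benchmark=True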
-
-
-def date_modified(path=__file__):
- # return human-readable file modification date, e.g. '2021-3-26'
- t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
- return f'{t.year}-{t.month}-{t.day}'
-
-
-def git_describe(path=Path(__file__).parent): # path must be a directory
- # return human-readable git description, e.g. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
- s = f'git -C {path} describe --tags --long --always'
- try:
- return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
- except subprocess.CalledProcessError as e:
- return '' # not a git repository
-
-
-def select_device(device='', batch_size=None):
- # device = 'cpu' or '0' or '0,1,2,3'
- s = f'YOLOv5 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string
- device = str(device).strip().lower().replace('cuda:', '') # to string, 'cuda:0' to '0'
- cpu = device == 'cpu'
- if cpu:
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
- elif device: # non-cpu device requested
- os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
- assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability
-
- cuda = not cpu and torch.cuda.is_available()
- if cuda:
- devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7
- n = len(devices) # device count
- if n > 1 and batch_size: # check batch_size is divisible by device_count
- assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
- space = ' ' * (len(s) + 1)
- for i, d in enumerate(devices):
- p = torch.cuda.get_device_properties(i)
- s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB
- else:
- s += 'CPU\n'
-
- logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe
- return torch.device('cuda:0' if cuda else 'cpu')
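-
-# Usage sketch, assuming the requested GPUs are actually visible:
-# device = select_device('0,1', batch_size=64) # batch size must be divisible by GPU count
-# device = select_device('cpu') # force CPU
-# model = model.to(device)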
-
-
-def time_synchronized():
- # pytorch-accurate time
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- return time.time()
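-
-# Usage sketch: the synchronize() call makes the host wait for pending CUDA
-# kernels, so the measured interval reflects real GPU time (model/x illustrative):
-# t0 = time_synchronized()
-# y = model(x)
-# dt = time_synchronized() - t0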
-
-
-def profile(x, ops, n=100, device=None):
- # profile a pytorch module or list of modules. Example usage:
- # x = torch.randn(16, 3, 640, 640) # input
- # m1 = lambda x: x * torch.sigmoid(x)
- # m2 = nn.SiLU()
- # profile(x, [m1, m2], n=100) # profile speed over 100 iterations
-
- device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
- x = x.to(device)
- x.requires_grad = True
- print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '')
- print(f"\n{'Params':>12s}{'GFLOPs':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}")
- for m in ops if isinstance(ops, list) else [ops]:
- m = m.to(device) if hasattr(m, 'to') else m # device
- m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type
- dtf, dtb, t = 0., 0., [0., 0., 0.] # dt forward, backward
- try:
- flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPs
- except Exception: # thop unavailable or FLOPs estimation failed
- flops = 0
-
- for _ in range(n):
- t[0] = time_synchronized()
- y = m(x)
- t[1] = time_synchronized()
- try:
- _ = y.sum().backward()
- t[2] = time_synchronized()
- except Exception: # no backward method
- t[2] = float('nan')
- dtf += (t[1] - t[0]) * 1000 / n # ms per op forward
- dtb += (t[2] - t[1]) * 1000 / n # ms per op backward
-
- s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
- s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
- p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
- print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}')
-
-
-def is_parallel(model):
- # Returns True if model is of type DP or DDP
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
-
-
-def de_parallel(model):
- # De-parallelize a model: returns single-GPU model if model is of type DP or DDP
- return model.module if is_parallel(model) else model
-
-
-def intersect_dicts(da, db, exclude=()):
- # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
- return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
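-
-# Usage sketch for partial checkpoint loading (the 'model' key and 'anchor'
-# exclusion are illustrative of typical usage, not required):
-# csd = intersect_dicts(ckpt['model'].float().state_dict(), model.state_dict(), exclude=['anchor'])
-# model.load_state_dict(csd, strict=False) # load only matching keys/shapes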
-
-
-def initialize_weights(model):
- for m in model.modules():
- t = type(m)
- if t is nn.Conv2d:
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif t is nn.BatchNorm2d:
- m.eps = 1e-3
- m.momentum = 0.03
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
- m.inplace = True
-
-
-def find_modules(model, mclass=nn.Conv2d):
- # Finds layer indices matching module class 'mclass'
- return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
-
-
-def sparsity(model):
- # Return global model sparsity
- a, b = 0., 0.
- for p in model.parameters():
- a += p.numel()
- b += (p == 0).sum()
- return b / a
-
-
-def prune(model, amount=0.3):
- # Prune model to requested global sparsity
- import torch.nn.utils.prune as prune
- print('Pruning model... ', end='')
- for name, m in model.named_modules():
- if isinstance(m, nn.Conv2d):
- prune.l1_unstructured(m, name='weight', amount=amount) # prune
- prune.remove(m, 'weight') # make permanent
- print(' %.3g global sparsity' % sparsity(model))
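-
-# Usage sketch: zero out 30% of each Conv2d's weights by L1 magnitude; the
-# resulting global sparsity (fraction of zero parameters) is printed:
-# prune(model, amount=0.3)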
-
-
-def fuse_conv_and_bn(conv, bn):
- # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
- fusedconv = nn.Conv2d(conv.in_channels,
- conv.out_channels,
- kernel_size=conv.kernel_size,
- stride=conv.stride,
- padding=conv.padding,
- groups=conv.groups,
- bias=True).requires_grad_(False).to(conv.weight.device)
-
- # prepare filters
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
-
- # prepare spatial bias
- b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
-
- return fusedconv
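-
-# Sanity-check sketch for the fusion above (eval mode so BN uses running stats):
-# conv, bn = nn.Conv2d(3, 16, 3, bias=False).eval(), nn.BatchNorm2d(16).eval()
-# x = torch.randn(1, 3, 32, 32)
-# assert torch.allclose(bn(conv(x)), fuse_conv_and_bn(conv, bn)(x), atol=1e-5)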
-
-
-def model_info(model, verbose=False, img_size=640):
- # Model information. img_size may be int or list, e.g. img_size=640 or img_size=[640, 320]
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
- n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
- if verbose:
- print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
- for i, (name, p) in enumerate(model.named_parameters()):
- name = name.replace('module_list.', '')
- print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
- (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
-
- try: # FLOPs
- from thop import profile
- stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
- img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input
- flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPs
- img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float
- fs = ', %.1f GFLOPs' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPs
- except (ImportError, Exception):
- fs = ''
-
- logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
-
-
-def load_classifier(name='resnet101', n=2):
- # Loads a pretrained model reshaped to n-class output
- model = torchvision.models.__dict__[name](pretrained=True)
-
- # ResNet model properties
- # input_size = [3, 224, 224]
- # input_space = 'RGB'
- # input_range = [0, 1]
- # mean = [0.485, 0.456, 0.406]
- # std = [0.229, 0.224, 0.225]
-
- # Reshape output to n classes
- filters = model.fc.weight.shape[1]
- model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
- model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
- model.fc.out_features = n
- return model
-
-
-def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
- # scales img(bs,3,y,x) by ratio constrained to gs-multiple
- if ratio == 1.0:
- return img
- else:
- h, w = img.shape[2:]
- s = (int(h * ratio), int(w * ratio)) # new size
- img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
- if not same_shape: # pad/crop img
- h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
- return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
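-
-# Usage sketch: a (16, 3, 256, 416) batch scaled by 0.7 is resized to
-# (179, 291), then padded up to the next gs=32 multiple, (192, 320):
-# out = scale_img(torch.zeros(16, 3, 256, 416), ratio=0.7) # out.shape[2:] == (192, 320)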
-
-
-def copy_attr(a, b, include=(), exclude=()):
- # Copy attributes from b to a, options to only include [...] and to exclude [...]
- for k, v in b.__dict__.items():
- if (len(include) and k not in include) or k.startswith('_') or k in exclude:
- continue
- else:
- setattr(a, k, v)
-
-
-class ModelEMA:
- """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
- Keep a moving average of everything in the model state_dict (parameters and buffers).
- This is intended to allow functionality like
- https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
- A smoothed version of the weights is necessary for some training schemes to perform well.
- This class is sensitive to where it is initialized in the sequence of model init,
- GPU assignment, and distributed training wrappers.
- """
-
- def __init__(self, model, decay=0.9999, updates=0):
- # Create EMA
- self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
- # if next(model.parameters()).device.type != 'cpu':
- # self.ema.half() # FP16 EMA
- self.updates = updates # number of EMA updates
- self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
- for p in self.ema.parameters():
- p.requires_grad_(False)
-
- def update(self, model):
- # Update EMA parameters
- with torch.no_grad():
- self.updates += 1
- d = self.decay(self.updates)
-
- msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
- for k, v in self.ema.state_dict().items():
- if v.dtype.is_floating_point:
- v *= d
- v += (1. - d) * msd[k].detach()
-
- def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
- # Update EMA attributes
- copy_attr(self.ema, model, include, exclude)
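-
-# Usage sketch reflecting the ordering note in the class docstring: construct
-# the EMA after the model is on its device but before any DDP wrapper, update
-# once per optimizer step, and validate/checkpoint with ema.ema:
-# ema = ModelEMA(model)
-# for imgs, targets in loader: # loader is illustrative
-#     ... # forward, backward, optimizer.step()
-#     ema.update(model)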
diff --git a/cv/detection/yolov5/pytorch/utils/wandb_logging/__init__.py b/cv/detection/yolov5/pytorch/utils/wandb_logging/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/cv/detection/yolov5/pytorch/utils/wandb_logging/log_dataset.py b/cv/detection/yolov5/pytorch/utils/wandb_logging/log_dataset.py
deleted file mode 100644
index 3a9a3d79fe014b63f5726d78ff2003a792d16984..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/wandb_logging/log_dataset.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import argparse
-
-import yaml
-
-from wandb_utils import WandbLogger
-
-WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
-
-
-def create_dataset_artifact(opt):
- with open(opt.data) as f:
- data = yaml.safe_load(f) # data dict
- logger = WandbLogger(opt, '', None, data, job_type='Dataset Creation')
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path')
- parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
- parser.add_argument('--project', type=str, default='YOLOv5', help='name of W&B Project')
- parser.add_argument('--entity', default=None, help='W&B entity')
-
- opt = parser.parse_args()
- opt.resume = False # Explicitly disallow resume check for dataset upload job
-
- create_dataset_artifact(opt)
diff --git a/cv/detection/yolov5/pytorch/utils/wandb_logging/wandb_utils.py b/cv/detection/yolov5/pytorch/utils/wandb_logging/wandb_utils.py
deleted file mode 100644
index 45aa088fa5e05538ecbab2362457cb38d369f4cf..0000000000000000000000000000000000000000
--- a/cv/detection/yolov5/pytorch/utils/wandb_logging/wandb_utils.py
+++ /dev/null
@@ -1,350 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-
-"""Utilities and tools for tracking runs with Weights & Biases."""
-import logging
-import os
-import sys
-from contextlib import contextmanager
-from pathlib import Path
-
-import yaml
-from tqdm import tqdm
-
-sys.path.append(str(Path(__file__).parent.parent.parent)) # add utils/ to path
-from utils.datasets import LoadImagesAndLabels
-from utils.datasets import img2label_paths
-from utils.general import colorstr, check_dataset, check_file
-
-try:
- import wandb
- from wandb import init, finish
-except ImportError:
- wandb = None
-wandb = None # NOTE: wandb is force-disabled here, overriding the import above
-RANK = int(os.getenv('RANK', -1))
-WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
-
-
-def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):
- return from_string[len(prefix):]
-
-
-def check_wandb_config_file(data_config_file):
- wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path
- if Path(wandb_config).is_file():
- return wandb_config
- return data_config_file
-
-
-def get_run_info(run_path):
- run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX))
- run_id = run_path.stem
- project = run_path.parent.stem
- entity = run_path.parent.parent.stem
- model_artifact_name = 'run_' + run_id + '_model'
- return entity, project, run_id, model_artifact_name
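-
-# Example, assuming the 'entity/project/run_id' layout implied above (names
-# are illustrative):
-# get_run_info('wandb-artifact://my-team/YOLOv5/3ks9xu2n')
-# -> ('my-team', 'YOLOv5', '3ks9xu2n', 'run_3ks9xu2n_model')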
-
-
-def check_wandb_resume(opt):
- process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None
- if isinstance(opt.resume, str):
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- if RANK not in [-1, 0]: # For resuming DDP runs
- entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
- api = wandb.Api()
- artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest')
- modeldir = artifact.download()
- opt.weights = str(Path(modeldir) / "last.pt")
- return True
- return None
-
-
-def process_wandb_config_ddp_mode(opt):
- with open(check_file(opt.data)) as f:
- data_dict = yaml.safe_load(f) # data dict
- train_dir, val_dir = None, None
- if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias)
- train_dir = train_artifact.download()
- train_path = Path(train_dir) / 'data/images/'
- data_dict['train'] = str(train_path)
-
- if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias)
- val_dir = val_artifact.download()
- val_path = Path(val_dir) / 'data/images/'
- data_dict['val'] = str(val_path)
- if train_dir or val_dir:
- ddp_data_path = str(Path(val_dir or train_dir) / 'wandb_local_data.yaml') # val_dir may be None if only train is an artifact
- with open(ddp_data_path, 'w') as f:
- yaml.safe_dump(data_dict, f)
- opt.data = ddp_data_path
-
-
-class WandbLogger():
- """Log training runs, datasets, models, and predictions to Weights & Biases.
-
- This logger sends information to W&B at wandb.ai. By default, this information
- includes hyperparameters, system configuration and metrics, model metrics,
- and basic data metrics and analyses.
-
- By providing additional command line arguments to train.py, datasets,
- models and predictions can also be logged.
-
- For more on how this logger is used, see the Weights & Biases documentation:
- https://docs.wandb.com/guides/integrations/yolov5
- """
-
- def __init__(self, opt, name, run_id, data_dict, job_type='Training'):
- # Pre-training routine --
- self.job_type = job_type
- self.wandb, self.wandb_run, self.data_dict = wandb, None if not wandb else wandb.run, data_dict
- # It's more elegant to stick to one wandb.init call, but useful config data would be overwritten by the WandbLogger's wandb.init call
- if isinstance(opt.resume, str): # checks resume from artifact
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- entity, project, run_id, model_artifact_name = get_run_info(opt.resume)
- model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name
- assert wandb, 'install wandb to resume wandb runs'
- # Resume wandb-artifact:// runs here (workaround to avoid overwriting wandb.config)
- self.wandb_run = wandb.init(id=run_id,
- project=project,
- entity=entity,
- resume='allow',
- allow_val_change=True)
- opt.resume = model_artifact_name
- elif self.wandb:
- self.wandb_run = wandb.init(config=opt,
- resume="allow",
- project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem,
- entity=opt.entity,
- name=name,
- job_type=job_type,
- id=run_id,
- allow_val_change=True) if not wandb.run else wandb.run
- if self.wandb_run:
- if self.job_type == 'Training':
- if not opt.resume:
- wandb_data_dict = self.check_and_upload_dataset(opt) if opt.upload_dataset else data_dict
- # Info useful for resuming from artifacts
- self.wandb_run.config.update({'opt': vars(opt), 'data_dict': data_dict}, allow_val_change=True)
- self.data_dict = self.setup_training(opt, data_dict)
- if self.job_type == 'Dataset Creation':
- self.data_dict = self.check_and_upload_dataset(opt)
- else:
- prefix = colorstr('wandb: ')
- print(f"{prefix}Install Weights & Biases for YOLOv5 logging with 'pip install wandb' (recommended)")
-
- def check_and_upload_dataset(self, opt):
- assert wandb, 'Install wandb to upload dataset'
- config_path = self.log_dataset_artifact(check_file(opt.data),
- opt.single_cls,
- 'YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem)
- print("Created dataset config file ", config_path)
- with open(config_path) as f:
- wandb_data_dict = yaml.safe_load(f)
- return wandb_data_dict
-
- def setup_training(self, opt, data_dict):
- self.log_dict, self.current_epoch, self.log_imgs = {}, 0, 16 # Logging Constants
- self.bbox_interval = opt.bbox_interval
- if isinstance(opt.resume, str):
- modeldir, _ = self.download_model_artifact(opt)
- if modeldir:
- self.weights = Path(modeldir) / "last.pt"
- config = self.wandb_run.config
- opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp = str(
- self.weights), config.save_period, config.total_batch_size, config.bbox_interval, config.epochs, \
- config.opt['hyp']
- data_dict = dict(self.wandb_run.config.data_dict) # eliminates the need for config file to resume
- if 'val_artifact' not in self.__dict__: # If --upload_dataset is set, use the existing artifact, don't download
- self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(data_dict.get('train'),
- opt.artifact_alias)
- self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(data_dict.get('val'),
- opt.artifact_alias)
- self.result_artifact, self.result_table, self.val_table, self.weights = None, None, None, None
- if self.train_artifact_path is not None:
- train_path = Path(self.train_artifact_path) / 'data/images/'
- data_dict['train'] = str(train_path)
- if self.val_artifact_path is not None:
- val_path = Path(self.val_artifact_path) / 'data/images/'
- data_dict['val'] = str(val_path)
- self.val_table = self.val_artifact.get("val")
- self.map_val_table_path()
- wandb.log({"validation dataset": self.val_table})
-
- if self.val_artifact is not None:
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
- self.result_table = wandb.Table(["epoch", "id", "ground truth", "prediction", "avg_confidence"])
- if opt.bbox_interval == -1:
- self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1
- return data_dict
-
- def download_dataset_artifact(self, path, alias):
- if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX):
- artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias)
- dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\","/"))
- assert dataset_artifact is not None, "Error: W&B dataset artifact doesn't exist"
- datadir = dataset_artifact.download()
- return datadir, dataset_artifact
- return None, None
-
- def download_model_artifact(self, opt):
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest")
- assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist'
- modeldir = model_artifact.download()
- epochs_trained = model_artifact.metadata.get('epochs_trained')
- total_epochs = model_artifact.metadata.get('total_epochs')
- is_finished = total_epochs is None
- assert not is_finished, 'training is finished, can only resume incomplete runs.'
- return modeldir, model_artifact
- return None, None
-
- def log_model(self, path, opt, epoch, fitness_score, best_model=False):
- model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', type='model', metadata={
- 'original_url': str(path),
- 'epochs_trained': epoch + 1,
- 'save period': opt.save_period,
- 'project': opt.project,
- 'total_epochs': opt.epochs,
- 'fitness_score': fitness_score
- })
- model_artifact.add_file(str(path / 'last.pt'), name='last.pt')
- wandb.log_artifact(model_artifact,
- aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else ''])
- print("Saving model artifact on epoch ", epoch + 1)
-
- def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False):
- with open(data_file) as f:
- data = yaml.safe_load(f) # data dict
- check_dataset(data)
- nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names'])
- names = {k: v for k, v in enumerate(names)} # to index dictionary
- self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(
- data['train'], rect=True, batch_size=1), names, name='train') if data.get('train') else None
- self.val_artifact = self.create_dataset_table(LoadImagesAndLabels(
- data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None
- if data.get('train'):
- data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train')
- if data.get('val'):
- data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val')
- path = data_file if overwrite_config else '_wandb.'.join(data_file.rsplit('.', 1)) # updated data.yaml path
- data.pop('download', None)
- data.pop('path', None)
- with open(path, 'w') as f:
- yaml.safe_dump(data, f)
-
- if self.job_type == 'Training': # builds correct artifact pipeline graph
- self.wandb_run.use_artifact(self.val_artifact)
- self.wandb_run.use_artifact(self.train_artifact)
- self.val_artifact.wait()
- self.val_table = self.val_artifact.get('val')
- self.map_val_table_path()
- else:
- self.wandb_run.log_artifact(self.train_artifact)
- self.wandb_run.log_artifact(self.val_artifact)
- return path
-
- def map_val_table_path(self):
- self.val_table_map = {}
- print("Mapping dataset")
- for i, data in enumerate(tqdm(self.val_table.data)):
- self.val_table_map[data[3]] = data[0]
-
- def create_dataset_table(self, dataset, class_to_id, name='dataset'):
- # TODO: Explore multiprocessing to split this loop in parallel; this is essential for speeding up the logging
- artifact = wandb.Artifact(name=name, type="dataset")
- img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None
- img_files = tqdm(dataset.img_files) if not img_files else img_files
- for img_file in img_files:
- if Path(img_file).is_dir():
- artifact.add_dir(img_file, name='data/images')
- labels_path = 'labels'.join(dataset.path.rsplit('images', 1))
- artifact.add_dir(labels_path, name='data/labels')
- else:
- artifact.add_file(img_file, name='data/images/' + Path(img_file).name)
- label_file = Path(img2label_paths([img_file])[0])
- artifact.add_file(str(label_file),
- name='data/labels/' + label_file.name) if label_file.exists() else None
- table = wandb.Table(columns=["id", "train_image", "Classes", "name"])
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()])
- for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)):
- box_data, img_classes = [], {}
- for cls, *xywh in labels[:, 1:].tolist():
- cls = int(cls)
- box_data.append({"position": {"middle": [xywh[0], xywh[1]], "width": xywh[2], "height": xywh[3]},
- "class_id": cls,
- "box_caption": "%s" % (class_to_id[cls])})
- img_classes[cls] = class_to_id[cls]
- boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space
- table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()),
- Path(paths).name)
- artifact.add(table, name)
- return artifact
-
- def log_training_progress(self, predn, path, names):
- if self.val_table and self.result_table:
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()])
- box_data = []
- total_conf = 0
- for *xyxy, conf, cls in predn.tolist():
- if conf >= 0.25:
- box_data.append(
- {"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
- "class_id": int(cls),
- "box_caption": "%s %.3f" % (names[cls], conf),
- "scores": {"class_score": conf},
- "domain": "pixel"})
- total_conf = total_conf + conf
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- id = self.val_table_map[Path(path).name]
- self.result_table.add_data(self.current_epoch,
- id,
- self.val_table.data[id][1],
- wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set),
- total_conf / max(1, len(box_data))
- )
-
- def log(self, log_dict):
- if self.wandb_run:
- for key, value in log_dict.items():
- self.log_dict[key] = value
-
- def end_epoch(self, best_result=False):
- if self.wandb_run:
- with all_logging_disabled():
- wandb.log(self.log_dict)
- self.log_dict = {}
- if self.result_artifact:
- self.result_artifact.add(self.result_table, 'result')
- wandb.log_artifact(self.result_artifact, aliases=['latest', 'last', 'epoch ' + str(self.current_epoch),
- ('best' if best_result else '')])
-
- wandb.log({"evaluation": self.result_table})
- self.result_table = wandb.Table(["epoch", "id", "ground truth", "prediction", "avg_confidence"])
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
-
- def finish_run(self):
- if self.wandb_run:
- if self.log_dict:
- with all_logging_disabled():
- wandb.log(self.log_dict)
- wandb.run.finish()
-
-
-@contextmanager
-def all_logging_disabled(highest_level=logging.CRITICAL):
- """ source - https://gist.github.com/simon-weber/7853144
- A context manager that will prevent any logging messages triggered during the body from being processed.
- :param highest_level: the maximum logging level in use.
- This would only need to be changed if a custom level greater than CRITICAL is defined.
- """
- previous_level = logging.root.manager.disable
- logging.disable(highest_level)
- try:
- yield
- finally:
- logging.disable(previous_level)
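-
-# Usage sketch, matching how end_epoch() and finish_run() use it above:
-# with all_logging_disabled():
-#     wandb.log(log_dict) # no logging side effects while metrics are sent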
diff --git a/cv/detection/yolov6/pytorch/.gitignore b/cv/detection/yolov6/pytorch/.gitignore
deleted file mode 100644
index 11fa492a2a6df6ed93ee96f439e9d2b83faccd95..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/.gitignore
+++ /dev/null
@@ -1,119 +0,0 @@
-coco
-
-# Byte-compiled / optimized / DLL files
-__pycache__/
-*.py[cod]
-*$py.class
-**/*.pyc
-
-# C extensions
-
-# Distribution / packaging
-
-.Python
-videos/
-build/
-runs/
-weights/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-parts/
-sdist/
-var/
-wheels/
-*.egg-info/
-.installed.cfg
-*.egg
-MANIFEST
-
-# PyInstaller
-# Usually these files are written by a python script from a template
-# before PyInstaller builds the exe, so as to inject date/other infos into it.
-*.manifest
-*.spec
-
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-
-# Unit test / coverage reports
-htmlcov/
-.tox/
-.coverage
-.coverage.*
-.cache
-nosetests.xml
-coverage.xml
-*.cover
-.hypothesis/
-.pytest_cache/
-
-# Translations
-*.mo
-*.pot
-
-# Django stuff:
-*.log
-local_settings.py
-db.sqlite3
-
-# Flask stuff:
-instance/
-.webassets-cache
-
-# Scrapy stuff:
-.scrapy
-
-# Sphinx documentation
-docs/_build/
-
-# PyBuilder
-target/
-
-# Jupyter Notebook
-.ipynb_checkpoints
-
-# pyenv
-.python-version
-
-# celery beat schedule file
-celerybeat-schedule
-
-# SageMath parsed files
-*.sage.py
-
-# Environments
-.env
-.venv
-env/
-venv/
-ENV/
-env.bak/
-venv.bak/
-
-# Spyder project settings
-.spyderproject
-.spyproject
-
-# Rope project settings
-.ropeproject
-
-# custom
-.DS_Store
-
-# Pytorch
-*.pth
-
-#vscode
-.vscode/*
-
-#user scripts
-*.sh
-
-# model files
-*.onnx
-*.pt
-*.engine
diff --git a/cv/detection/yolov6/pytorch/LICENSE b/cv/detection/yolov6/pytorch/LICENSE
deleted file mode 100644
index f288702d2fa16d3cdf0035b15a9fcbc552cd88e7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/LICENSE
+++ /dev/null
@@ -1,674 +0,0 @@
- GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-
-    <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
-    along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
-    <program>  Copyright (C) <year>  <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/cv/detection/yolov6/pytorch/README.md b/cv/detection/yolov6/pytorch/README.md
index 47b6daf8ac352a90c816e40fcf564dfa6956a292..6c6f4e951939c67a2c5a36ad64696184ae65a9b9 100644
--- a/cv/detection/yolov6/pytorch/README.md
+++ b/cv/detection/yolov6/pytorch/README.md
@@ -1,14 +1,17 @@
# YOLOv6
+
## Model description
+
For years, the YOLO series has been the de facto industry-level standard for efficient object detection. The YOLO community has prospered overwhelmingly to enrich its use in a multitude of hardware platforms and abundant scenarios. In this technical report, we strive to push its limits to the next level, stepping forward with an unwavering mindset for industry application.
-Considering the diverse requirements for speed and accuracy in the real environment, we extensively examine the up-to-date object detection advancements either from industry or academia. Specifically, we heavily assimilate ideas from recent network design, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practice to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of YOLO authors, we name it YOLOv6. We also express our warm welcome to users and contributors for further enhancement. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale~(YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings a new state-of-the-art 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L also achieves better accuracy performance (i.e., 49.5%/52.3%) than other detectors with a similar inference speed. We carefully conducted experiments to validate the effectiveness of each component.
+Considering the diverse requirements for speed and accuracy in the real environment, we extensively examine the up-to-date object detection advancements either from industry or academia. Specifically, we heavily assimilate ideas from recent network design, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practice to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of YOLO authors, we name it YOLOv6. We also express our warm welcome to users and contributors for further enhancement. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings a new state-of-the-art 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L also achieves better accuracy performance (i.e., 49.5%/52.3%) than other detectors with a similar inference speed. We carefully conducted experiments to validate the effectiveness of each component.
Implementation of paper:
+
- [YOLOv6 v3.0: A Full-Scale Reloading](https://arxiv.org/abs/2301.05586) 🔥
- [YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications](https://arxiv.org/abs/2209.02976)
-
## Installing packages
-```
+
+```bash
## install libGL
yum install mesa-libGL
@@ -19,16 +22,19 @@ cd zlib-1.2.9/
./configure && make install
cd ..
rm -rf zlib-1.2.9.tar.gz zlib-1.2.9/
-```
-```
+## clone yolov6
+git clone https://gitee.com/deep-spark/deepsparkhub-GPL.git
+cd deepsparkhub-GPL/cv/detection/yolov6/pytorch/
pip3 install -r requirements.txt
```
## Preparing datasets
+
- data: prepare the dataset and specify the dataset paths in data.yaml ([COCO](http://cocodataset.org), [YOLO-format COCO labels](https://github.com/meituan/YOLOv6/releases/download/0.1.0/coco2017labels.zip))
- make sure your dataset structure is as follows:
-```
+
+```bash
├── coco
│ ├── annotations
│ │ ├── instances_train2017.json
@@ -45,24 +51,26 @@ pip3 install -r requirements.txt
## Training
+> Seeing "AttributeError: 'NoneType' object has no attribute 'python_exit_status'" after training is a [known issue](https://github.com/meituan/YOLOv6/issues/506); add "--workers 0" if you want to avoid it.
+
Single GPU training
-```
+```bash
python3 tools/train.py --batch 32 --conf configs/yolov6s.py --data data/coco.yaml --epoch 300 --name yolov6s_coco
```
Multiple GPU training
-```
+
+```bash
python3 -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --epoch 300 --name yolov6s_coco --device 0,1,2,3,4,5,6,7
```
## Training Results
-Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | mAP<sup>val</sup><br/>0.5 |
-| :----------------------------------------------------------- | ---- | :----------------------- | --------------------------------------- |
-| YOLOv6-S| 640 | 44.3 | 61.3 |
-## Remark
-After training, reporting "AttributeError: 'NoneType' object has no attribute 'python_exit_status'" is a [known issue](https://github.com/meituan/YOLOv6/issues/506), add "--workers 0" if you want to avoid.
+| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | mAP<sup>val</sup><br/>0.5 |
+| :------- | ---- | :----------------------- | ------------------- |
+| YOLOv6-S | 640 | 44.3 | 61.3 |
## Reference
-https://github.com/meituan/YOLOv6
\ No newline at end of file
+
+- [YOLOv6](https://github.com/meituan/YOLOv6)
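
The dataset layout in this README is easy to get slightly wrong. Before launching training, a quick check along these lines can confirm the tree is in place; the paths below are assumptions taken from the tree sketched above and the usual YOLO-format COCO layout, so adjust them to whatever data.yaml actually points at:

```python
from pathlib import Path

# Assumed layout, mirroring the dataset tree in the README above;
# adjust COCO_ROOT and the entries to match your data.yaml.
COCO_ROOT = Path("data/coco")
REQUIRED = [
    "annotations/instances_train2017.json",
    "images/train2017",
    "images/val2017",
    "labels/train2017",
    "labels/val2017",
]

missing = [p for p in REQUIRED if not (COCO_ROOT / p).exists()]
if missing:
    raise SystemExit(f"dataset incomplete, missing: {missing}")
print("COCO layout looks complete")
```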
diff --git a/cv/detection/yolov6/pytorch/configs/base/README.md b/cv/detection/yolov6/pytorch/configs/base/README.md
deleted file mode 100644
index 77ef5a4e9c7f99e60b7b51f4e366c67477c35f22..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
-## YOLOv6 base model
-
-English | [Simplified Chinese](./README_cn.md)
-
-### Features
-
-- Use only regular convolutions and ReLU activation functions.
-
-- Apply CSP (1/2 channel dim) blocks in the network structure, except for the Nano base model.
-
-Advantages:
-- Adopt a unified network structure and configuration; the accuracy loss of the PTQ 8-bit quantization model is negligible.
-- Suitable for users who are just getting started, or who need to apply, optimize, and deploy an 8-bit quantized model quickly and frequently.
-
-
-### Performance
-
-| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>TRT FP16 b1<br/>(FPS) | Speed<sup>T4</sup><br/>TRT FP16 b32<br/>(FPS) | Speed<sup>T4</sup><br/>TRT INT8 b1<br/>(FPS) | Speed<sup>T4</sup><br/>TRT INT8 b32<br/>(FPS) | Params<br/>(M) | FLOPs<br/>(G) |
-| :--------------------------------------------------------------------------------------------- | --- | ----------------- | ----- | ---- | ---- | ---- | ----- | ------ |
-| [**YOLOv6-N-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6n_base.pt) | 640 | 36.6<sup>distill</sup> | 727 | 1302 | 814 | 1805 | 4.65 | 11.46 |
-| [**YOLOv6-S-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6s_base.pt) | 640 | 45.3<sup>distill</sup> | 346 | 525 | 487 | 908 | 13.14 | 30.6 |
-| [**YOLOv6-M-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6m_base.pt) | 640 | 49.4<sup>distill</sup> | 179 | 245 | 284 | 439 | 28.33 | 72.30 |
-| [**YOLOv6-L-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6l_base.pt) | 640 | 51.1<sup>distill</sup> | 116 | 157 | 196 | 288 | 59.61 | 150.89 |
-
-- Speed is tested with TensorRT 8.4.2.4 on T4.
-- The processes of model training, evaluation, and inference are the same as the original ones. For details, please refer to [this README](https://github.com/meituan/YOLOv6#quick-start).
diff --git a/cv/detection/yolov6/pytorch/configs/base/README_cn.md b/cv/detection/yolov6/pytorch/configs/base/README_cn.md
deleted file mode 100644
index b6b01d14487161dd1f0973c2550e347c2f637d62..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/README_cn.md
+++ /dev/null
@@ -1,25 +0,0 @@
-## YOLOv6 base model
-
-Simplified Chinese | [English](./README.md)
-
-### Features
-
-- Uses only regular convolutions and ReLU activation functions.
-
-- Applies CSP (1/2 channel) blocks throughout the network structure, except for the Nano model.
-
-Advantages:
-- Adopts a unified network structure and configuration, with small accuracy loss for the PTQ 8-bit quantized model; suitable for beginners and for users who need to iterate and deploy 8-bit quantized models quickly and frequently.
-
-
-### Performance
-
-| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>TRT FP16 b1<br/>(FPS) | Speed<sup>T4</sup><br/>TRT FP16 b32<br/>(FPS) | Speed<sup>T4</sup><br/>TRT INT8 b1<br/>(FPS) | Speed<sup>T4</sup><br/>TRT INT8 b32<br/>(FPS) | Params<br/>(M) | FLOPs<br/>(G) |
-| :--------------------------------------------------------------------------------------------- | --- | ----------------- | ----- | ---- | ---- | ---- | ----- | ------ |
-| [**YOLOv6-N-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6n_base.pt) | 640 | 36.6<sup>distill</sup> | 727 | 1302 | 814 | 1805 | 4.65 | 11.46 |
-| [**YOLOv6-S-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6s_base.pt) | 640 | 45.3<sup>distill</sup> | 346 | 525 | 487 | 908 | 13.14 | 30.6 |
-| [**YOLOv6-M-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6m_base.pt) | 640 | 49.4<sup>distill</sup> | 179 | 245 | 284 | 439 | 28.33 | 72.30 |
-| [**YOLOv6-L-base**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6l_base.pt) | 640 | 51.1<sup>distill</sup> | 116 | 157 | 196 | 288 | 59.61 | 150.89 |
-
-- Speed is tested on T4 with TensorRT 8.4.2.4.
-- The processes of model training, evaluation, and inference are the same as the original ones. For details, please refer to [the main README](https://github.com/meituan/YOLOv6/blob/main/README_cn.md#%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B).
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6l_base.py b/cv/detection/yolov6/pytorch/configs/base/yolov6l_base.py
deleted file mode 100644
index ef2dbbb239c314cefe8f4ca91b513cdd8ea81766..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6l_base.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# YOLOv6l large base model
-model = dict(
- type='YOLOv6l_base',
- pretrained=None,
- depth_multiple=1.0,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-training_mode = "conv_relu"
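
All of the config files in this family share the same template: full-width `out_channels` and `num_repeats` lists, scaled per variant by `width_multiple` and `depth_multiple`. A minimal sketch of how such multipliers are conventionally applied in YOLO-style code (the helper names here are illustrative, not YOLOv6's actual API):

```python
import math

def scale_width(channels: int, width_multiple: float, divisor: int = 8) -> int:
    # Round scaled channel counts up to a multiple of `divisor`,
    # as is conventional for YOLO-style width scaling.
    return math.ceil(channels * width_multiple / divisor) * divisor

def scale_depth(num_repeats: int, depth_multiple: float) -> int:
    # The single-repeat stem stage is conventionally left untouched.
    return max(round(num_repeats * depth_multiple), 1) if num_repeats > 1 else num_repeats

out_channels = [64, 128, 256, 512, 1024]  # backbone template from the config above
num_repeats = [1, 6, 12, 18, 6]

# The large base model (1.0 / 1.0) keeps the template as-is; the medium
# base config (depth 0.80, width 0.75) would shrink it like this:
print([scale_width(c, 0.75) for c in out_channels])  # [48, 96, 192, 384, 768]
print([scale_depth(n, 0.80) for n in num_repeats])   # [1, 5, 10, 14, 5]
```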
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6l_base_finetune.py b/cv/detection/yolov6/pytorch/configs/base/yolov6l_base_finetune.py
deleted file mode 100644
index 7e8dc062672a402a50e31627f7a7243cd961ce62..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6l_base_finetune.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# YOLOv6 large base model
-model = dict(
- type='YOLOv6l_base',
- depth_multiple=1.0,
- width_multiple=1.0,
- pretrained=None,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-training_mode = "conv_relu"
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6m_base.py b/cv/detection/yolov6/pytorch/configs/base/yolov6m_base.py
deleted file mode 100644
index 5670f096cf9f38b2790e968042093b44f603b381..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6m_base.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# YOLOv6m medium/large base model
-model = dict(
- type='YOLOv6m_base',
- pretrained=None,
- depth_multiple=0.80,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 0.8,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-training_mode = "conv_relu"
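
The `anchors_init` table repeated in these head blocks packs `anchors=3` (width, height) pairs per row, one row per output stride. A short sketch of that reading (my interpretation of the layout, not code from the repo):

```python
anchors_init = [[10, 13, 19, 19, 33, 23],
                [30, 61, 59, 59, 59, 119],
                [116, 90, 185, 185, 373, 326]]
strides = [8, 16, 32]

# Each row flattens three (w, h) anchor pairs for one detection scale
# as w1, h1, w2, h2, w3, h3.
per_scale = {
    stride: [(row[i], row[i + 1]) for i in range(0, len(row), 2)]
    for stride, row in zip(strides, anchors_init)
}
print(per_scale[8])  # [(10, 13), (19, 19), (33, 23)] -- stride-8 anchors
```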
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6m_base_finetune.py b/cv/detection/yolov6/pytorch/configs/base/yolov6m_base_finetune.py
deleted file mode 100644
index af5449ec19681993b7c51ab4e3b95adb36727943..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6m_base_finetune.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# YOLOv6m medium/large base model
-model = dict(
- type='YOLOv6m_base',
- pretrained=None,
- depth_multiple=0.80,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 0.8,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-training_mode = "conv_relu"
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6n_base.py b/cv/detection/yolov6/pytorch/configs/base/yolov6n_base.py
deleted file mode 100644
index 8340ca6024b4a0d2c1c01c0f5f1d0ac832136348..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6n_base.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# YOLOv6n nano base model
-model = dict(
- type='YOLOv6n_base',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True, # set to True if you want to further train with distillation
- reg_max=16, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-training_mode = "conv_relu"
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6n_base_finetune.py b/cv/detection/yolov6/pytorch/configs/base/yolov6n_base_finetune.py
deleted file mode 100644
index 593c3def90184a7eae3acdee87f49e7d5aecfac0..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6n_base_finetune.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# YOLOv6n nano base model
-model = dict(
- type='YOLOv6n_base',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-training_mode = "conv_relu"
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6s_base.py b/cv/detection/yolov6/pytorch/configs/base/yolov6s_base.py
deleted file mode 100644
index 4e28c17858c14837ba2bacc88aaf93cb8847f086..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6s_base.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# YOLOv6s small base model
-model = dict(
- type='YOLOv6s_base',
- pretrained=None,
- depth_multiple=0.70,
- width_multiple=0.50,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',#CSPRepPANNeck
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True, # set to True if you want to further train with distillation
- reg_max=16, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-training_mode = "conv_relu"
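
Several of these head blocks carry the comment that `reg_max` must be 0 whenever `use_dfl` is False, and is 16 when DFL (and hence further distillation) is enabled. A tiny validation helper capturing that pairing, offered as a sketch rather than anything shipped with the repo:

```python
def check_dfl_settings(head: dict) -> None:
    # Encodes the pairing stated in the config comments: use_dfl=True
    # goes with reg_max=16, use_dfl=False goes with reg_max=0.
    if head["use_dfl"]:
        assert head["reg_max"] > 0, "use_dfl=True needs a positive reg_max (16 in these configs)"
    else:
        assert head["reg_max"] == 0, "if use_dfl is False, set reg_max to 0"

check_dfl_settings({"use_dfl": True, "reg_max": 16})   # base config: passes
check_dfl_settings({"use_dfl": False, "reg_max": 0})   # finetune config: passes
```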
diff --git a/cv/detection/yolov6/pytorch/configs/base/yolov6s_base_finetune.py b/cv/detection/yolov6/pytorch/configs/base/yolov6s_base_finetune.py
deleted file mode 100644
index eb4d2159aa9c55032d1b0b8abfce0e92940dc262..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/base/yolov6s_base_finetune.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# YOLOv6s small base model
-model = dict(
- type='YOLOv6s_base',
- pretrained=None,
- depth_multiple=0.70,
- width_multiple=0.50,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-training_mode = "conv_relu"
diff --git a/cv/detection/yolov6/pytorch/configs/experiment/eval_640_repro.py b/cv/detection/yolov6/pytorch/configs/experiment/eval_640_repro.py
deleted file mode 100644
index 1f6a6217e5f2efbc52af22db96dadfa86355de2c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/experiment/eval_640_repro.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# eval params for different scales
-
-eval_params = dict(
- default = dict(
- img_size=640,
- shrink_size=2,
- infer_on_rect=False,
- ),
- yolov6n = dict(
- img_size=640,
- shrink_size=4,
- infer_on_rect=False,
- ),
- yolov6t = dict(
- img_size=640,
- shrink_size=6,
- infer_on_rect=False,
- ),
- yolov6s = dict(
- img_size=640,
- shrink_size=6,
- infer_on_rect=False,
- ),
- yolov6m = dict(
- img_size=640,
- shrink_size=4,
- infer_on_rect=False,
- ),
- yolov6l = dict(
- img_size=640,
- shrink_size=4,
- infer_on_rect=False,
- ),
- yolov6l_relu = dict(
- img_size=640,
- shrink_size=2,
- infer_on_rect=False,
- ),
- yolov6n6 = dict(
- img_size=1280,
- shrink_size=17,
- infer_on_rect=False,
- ),
- yolov6s6 = dict(
- img_size=1280,
- shrink_size=8,
- infer_on_rect=False,
- ),
- yolov6m6 = dict(
- img_size=1280,
- shrink_size=64,
- infer_on_rect=False,
- ),
- yolov6l6 = dict(
- img_size=1280,
- shrink_size=41,
- infer_on_rect=False,
- ),
- yolov6s_mbla = dict(
- img_size=640,
- shrink_size=7,
- infer_on_rect=False,
- ),
- yolov6m_mbla = dict(
- img_size=640,
- shrink_size=7,
- infer_on_rect=False,
- ),
- yolov6l_mbla = dict(
- img_size=640,
- shrink_size=7,
- infer_on_rect=False,
- ),
- yolov6x_mbla = dict(
- img_size=640,
- shrink_size=3,
- infer_on_rect=False,
- )
-)
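
This file is a pure lookup table keyed by model name, with a "default" entry for anything unlisted. Presumably the evaluation code resolves it along these lines (a sketch under that assumption; the actual resolution logic lives in YOLOv6's eval tooling):

```python
eval_params = {
    "default": dict(img_size=640, shrink_size=2, infer_on_rect=False),
    "yolov6s": dict(img_size=640, shrink_size=6, infer_on_rect=False),
    "yolov6s6": dict(img_size=1280, shrink_size=8, infer_on_rect=False),
}

def get_eval_params(model_name: str) -> dict:
    # Unlisted models fall back to the "default" entry.
    return eval_params.get(model_name, eval_params["default"])

print(get_eval_params("yolov6s")["shrink_size"])     # 6
print(get_eval_params("yolov6_custom")["img_size"])  # 640, via the default entry
```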
diff --git a/cv/detection/yolov6/pytorch/configs/experiment/yolov6n_with_eval_params.py b/cv/detection/yolov6/pytorch/configs/experiment/yolov6n_with_eval_params.py
deleted file mode 100644
index e7366b334748465f949b7054db0f1803c72b6534..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/experiment/yolov6n_with_eval_params.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# YOLOv6n model with eval params (used during training)
-model = dict(
- type='YOLOv6n',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.02, #0.01 # 0.02
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-# Eval params used when evaluating the model.
-# If an eval_params item is a list, e.g. conf_thres=[0.03, 0.03],
-# the first value will be used in train.py and the second in eval.py.
-eval_params = dict(
-    batch_size=None, # None means use the per-device train batch size * 2
-    img_size=None, # None means use the same size as the training images
- conf_thres=0.03,
- iou_thres=0.65,
-
-    # padding and coordinate scaling
-    shrink_size=None, # None means the image will not be shrunk.
- infer_on_rect=True,
-
- #metric
- verbose=False,
- do_coco_metric=True,
- do_pr_metric=False,
- plot_curve=False,
- plot_confusion_matrix=False
-)
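
The comment block above defines a small convention: a scalar eval_params value is shared by training-time and standalone evaluation, while a two-element list splits into a train.py value and an eval.py value. A sketch of that convention (the helper name is mine, not the repo's):

```python
def resolve_eval_param(value, for_training: bool):
    # Scalars are shared by train.py and eval.py; a two-element list
    # splits into (train value, eval value) per the config comment.
    if isinstance(value, (list, tuple)):
        train_value, eval_value = value
        return train_value if for_training else eval_value
    return value

print(resolve_eval_param([0.03, 0.05], for_training=True))   # 0.03 -> train.py
print(resolve_eval_param([0.03, 0.05], for_training=False))  # 0.05 -> eval.py
print(resolve_eval_param(0.65, for_training=False))          # 0.65, shared
```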
diff --git a/cv/detection/yolov6/pytorch/configs/experiment/yolov6s_csp_scaled.py b/cv/detection/yolov6/pytorch/configs/experiment/yolov6s_csp_scaled.py
deleted file mode 100644
index ba28843acfafa85957bc294c69c42019c99ec5f4..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/experiment/yolov6s_csp_scaled.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# YOLOv6m model
-model = dict(
- type='YOLOv6s_csp',
- pretrained=None,
- depth_multiple=0.70,
- width_multiple=0.50,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- ),
- neck=dict(
- type='CSPRepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- iou_type='giou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/experiment/yolov6t.py b/cv/detection/yolov6/pytorch/configs/experiment/yolov6t.py
deleted file mode 100644
index afacd436ce180947c9cf74d5513da234891915b8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/experiment/yolov6t.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# YOLOv6t model
-model = dict(
- type='YOLOv6t',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.375,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/experiment/yolov6t_csp_scaled.py b/cv/detection/yolov6/pytorch/configs/experiment/yolov6t_csp_scaled.py
deleted file mode 100644
index e8ba99a90633af3ab2f62a486dc9fdebafdf85e0..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/experiment/yolov6t_csp_scaled.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n_csp',
- pretrained=None,
- depth_multiple=0.60,
- width_multiple=0.50,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- ),
- neck=dict(
- type='CSPRepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- iou_type='giou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/experiment/yolov6t_finetune.py b/cv/detection/yolov6/pytorch/configs/experiment/yolov6t_finetune.py
deleted file mode 100644
index 8be474166e22da3271dad1bf002c33f35b7ddc64..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/experiment/yolov6t_finetune.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# YOLOv6t model
-model = dict(
- type='YOLOv6t',
- pretrained='weights/yolov6t.pt',
- depth_multiple=0.33,
- width_multiple=0.375,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/README.md b/cv/detection/yolov6/pytorch/configs/mbla/README.md
deleted file mode 100644
index d163124d6810aa2f35379e60cb8bff6e600e395c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
-## YOLOv6 mbla model
-
-English | [Simplified Chinese](./README_cn.md)
-
-### Features
-
-- Apply MBLABlock (Multi-Branch Layer Aggregation Block) blocks in the network structure.
-
-Advantages:
-- Adopt a unified network structure and configuration.
-
-- Better performance for the small model compared to the YOLOv6 3.0 release.
-
-- Better performance compared to the YOLOv6 3.0 base models.
-
-
-
-### Performance
-
-| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>trt fp16 b1<br/>(fps) | Speed<sup>T4</sup><br/>trt fp16 b32<br/>(fps) | Params<br/>(M) | FLOPs<br/>(G) |
-| :----------------------------------------------------------- | -------- | :----------------------- | -------------------------------------- | --------------------------------------- | -------------------- | ------------------- |
-| [**YOLOv6-S-mbla**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6s_mbla.pt) | 640 | 47.0<sup>distill</sup> | 300 | 424 | 11.6 | 29.8 |
-| [**YOLOv6-M-mbla**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6m_mbla.pt) | 640 | 50.3<sup>distill</sup> | 168 | 216 | 26.1 | 66.7 |
-| [**YOLOv6-L-mbla**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6l_base.pt) | 640 | 52.0<sup>distill</sup> | 129 | 154 | 46.3 | 118.2 |
-| [**YOLOv6-X-base**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6x_base.pt) | 640 | 53.5<sup>distill</sup> | 78 | 94 | 78.8 | 199.0 |
-
-- Speed is tested with TensorRT 8.4.2.4 on T4.
-- The processes of model training, evaluation, and inference are the same as the original ones. For details, please refer to [this README](https://github.com/meituan/YOLOv6#quick-start).
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/README_cn.md b/cv/detection/yolov6/pytorch/configs/mbla/README_cn.md
deleted file mode 100644
index ad399fe094f7e5026e52a4f50153550607021682..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/README_cn.md
+++ /dev/null
@@ -1,26 +0,0 @@
-## YOLOv6 MBLA model
-
-Simplified Chinese | [English](./README.md)
-
-### Features
-
-- The network body is built from MBLABlock (Multi-Branch Layer Aggregation Block) modules.
-
-Advantages:
-- Adopts a unified network structure and configuration.
-
-- Improved accuracy at the S scale compared to the 3.0 release, and improved accuracy at every scale compared to the 3.0 base models.
-
-
-
-### Performance
-
-| Model | Input size | mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>trt fp16 b1<br/>(fps) | Speed<sup>T4</sup><br/>trt fp16 b32<br/>(fps) | Params<br/>(M) | FLOPs<br/>(G) |
-| :----------------------------------------------------------- | -------- | :----------------------- | -------------------------------------- | --------------------------------------- | -------------------- | ------------------- |
-| [**YOLOv6-S-mbla**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6s_mbla.pt) | 640 | 47.0<sup>distill</sup> | 300 | 424 | 11.6 | 29.8 |
-| [**YOLOv6-M-mbla**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6m_mbla.pt) | 640 | 50.3<sup>distill</sup> | 168 | 216 | 26.1 | 66.7 |
-| [**YOLOv6-L-mbla**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6l_base.pt) | 640 | 52.0<sup>distill</sup> | 129 | 154 | 46.3 | 118.2 |
-| [**YOLOv6-X-base**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6x_base.pt) | 640 | 53.5<sup>distill</sup> | 78 | 94 | 78.8 | 199.0 |
-
-- Speed is tested on T4 with TensorRT 8.4.2.4.
-- The processes of model training, evaluation, and inference are the same as the original ones. For details, please refer to [the main README](https://github.com/meituan/YOLOv6/blob/main/README_cn.md#%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B).
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6l_mbla.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6l_mbla.py
deleted file mode 100644
index 7534b70541a2b022e1562153587db11aed25cf20..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6l_mbla.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6l model
-model = dict(
- type='YOLOv6l_mbla',
- pretrained=None,
- depth_multiple=0.5,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6l_mbla_finetune.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6l_mbla_finetune.py
deleted file mode 100644
index 6ea88967c5f9af8c930d0e85859f85a9de137959..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6l_mbla_finetune.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6l model
-model = dict(
- type='YOLOv6l_mbla',
- pretrained=None,
- depth_multiple=0.5,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6m_mbla.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6m_mbla.py
deleted file mode 100644
index f84fc43d14aec7a9ff80078f3b871de5fd990a02..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6m_mbla.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6m-mbla model
-model = dict(
- type='YOLOv6m_mbla',
- pretrained=None,
- depth_multiple=0.5,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6m_mbla_finetune.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6m_mbla_finetune.py
deleted file mode 100644
index aa0bc816a67ca8ee2b4f84290f630c94a638768b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6m_mbla_finetune.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6m-mbla model
-model = dict(
- type='YOLOv6m_mbla',
- pretrained=None,
- depth_multiple=0.5,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6s_mbla.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6s_mbla.py
deleted file mode 100644
index eedc76eec2454209f38c749d568dfa1e2f9b1d05..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6s_mbla.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6s-mbla model
-model = dict(
- type='YOLOv6s_mbla',
- pretrained=None,
- depth_multiple=0.5,
- width_multiple=0.5,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6s_mbla_finetune.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6s_mbla_finetune.py
deleted file mode 100644
index a9812c7166a0ccb2b26e9c089b023993820fbe59..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6s_mbla_finetune.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6s-mbla model
-model = dict(
- type='YOLOv6s_mbla',
- pretrained=None,
- depth_multiple=0.5,
- width_multiple=0.5,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6x_mbla.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6x_mbla.py
deleted file mode 100644
index b7b9703c2e85d6d803e9a83aadb3b341a3e76906..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6x_mbla.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6x-mbla model
-model = dict(
- type='YOLOv6x_mbla',
- pretrained=None,
- depth_multiple=1.0,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/mbla/yolov6x_mbla_finetune.py b/cv/detection/yolov6/pytorch/configs/mbla/yolov6x_mbla_finetune.py
deleted file mode 100644
index 65c57cb21e1954dc492ae6b4bc05044f6703a987..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/mbla/yolov6x_mbla_finetune.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# YOLOv6x-mbla model
-model = dict(
- type='YOLOv6x_mbla',
- pretrained=None,
- depth_multiple=1.0,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 4, 8, 8, 4],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- stage_block_type="MBLABlock",
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[8, 8, 8, 8],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- stage_block_type="MBLABlock",
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/qarepvgg/README.md b/cv/detection/yolov6/pytorch/configs/qarepvgg/README.md
deleted file mode 100644
index 81b130d28b7c91d6b456a5b3ece53927b08ae09e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/qarepvgg/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
-## YOLOv6 base model
-
-English | [简体中文](./README_cn.md)
-
-### Features
-
-- This is a QARepVGG-version implementation of YOLOv6, following [QARepVGG](https://arxiv.org/abs/2212.01593).
-
-- The QARep models have slightly lower float accuracy on COCO than the RepVGG models, but achieve much better quantized accuracy.
-
-- The INT8 accuracies listed were obtained using a simple PTQ process, as implemented in the [`onnx_to_trt.py`](../../deploy/TensorRT/onnx_to_trt.py) script. However, higher accuracies could be achieved using Quantization-Aware Training (QAT) due to the specific architecture design of the QARepVGG model.
-
-### Performance
-
-| Model | Size | Float mAP<sup>val</sup><br/>0.5:0.95 | INT8 mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>trt fp16 b32 (fps) | Speed<sup>T4</sup><br/>trt int8 b32 (fps) | Params<br/>(M) | FLOPs<br/>(G) |
-| :----------------------------------------------------------- | -------- | :----------------------- | -------------------------------------- | --------------------------------------- | -------------------- | ------------------- | -------------------- |
-| [**YOLOv6-N**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6n.pt) | 640 | 37.5 | 34.3 | 1286 | 1773 |4.7 | 11.4 |
-| [**YOLOv6-N-qa**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6n_qa.pt) | 640 | 37.1 | 36.4 | 1286 | 1773 | 4.7 | 11.4 |
-| [**YOLOv6-S**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6s.pt) | 640 | 45.0 | 41.3 | 513 | 1117 | 18.5 | 45.3 |
-| [**YOLOv6-S-qa**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6s_qa.pt) | 640 | 44.7 | 44.0 | 513 | 1117 | 18.5 | 45.3 |
-| [**YOLOv6-M**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6m.pt) | 640 | 50.0 | 48.1 | 250 | 439 | 34.9 | 85.8 |
-| [**YOLOv6-M-qa**](https://github.com/meituan/YOLOv6/releases/download/0.3.0/yolov6m_qa.pt) | 640 | 49.7 | 49.4 | 250 | 439 | 34.9 | 85.8 |
-
-- Speed is tested with TensorRT 8.4 on T4.
-- We have not conducted experiments on the YOLOv6-L model since it does not use the RepVGG architecture.
-- The processes of model training, evaluation, and inference are the same as the original ones. For details, please refer to [this README](https://github.com/meituan/YOLOv6#quick-start).
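Each config deleted by this patch is a plain Python module that defines `model`, `solver`, and `data_aug` dicts (plus `training_mode`, `ptq`, and `qat` where relevant) at top level. A loader only needs to execute the file and collect those names; the sketch below uses the standard library only, and the `load_config` helper and its key list are illustrative, not YOLOv6's actual loader.

```python
# Illustrative loader for plain-Python config files like those in this patch.
import importlib.util

def load_config(path: str) -> dict:
    spec = importlib.util.spec_from_file_location("yolov6_config", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the config file as a module
    # Keep only the top-level settings a trainer would care about.
    keys = ("model", "solver", "data_aug", "training_mode", "ptq", "qat")
    return {k: getattr(module, k) for k in keys if hasattr(module, k)}

# Hypothetical call site, assuming the file exists on disk:
cfg = load_config("configs/qarepvgg/yolov6s_qa.py")
print(cfg["model"]["backbone"]["type"], cfg["training_mode"])
```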
diff --git a/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6m_qa.py b/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6m_qa.py
deleted file mode 100644
index c0690f15e791ca0383515f94240d08f7e2896254..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6m_qa.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# YOLOv6m model
-model = dict(
- type='YOLOv6m',
- pretrained=None,
- depth_multiple=0.60,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(2)/3,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(2)/3,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 0.8,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-
-training_mode='qarepvggv2'
diff --git a/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6n_qa.py b/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6n_qa.py
deleted file mode 100644
index b42d9ddb4b29de2a79694a9d03e708369cbeba55..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6n_qa.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.02,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-training_mode='qarepvggv2'
diff --git a/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6s_qa.py b/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6s_qa.py
deleted file mode 100644
index 3051679a25844dace2b2ff465f0ab7a5ba9af094..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/qarepvgg/yolov6s_qa.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# YOLOv6s model
-model = dict(
- type='YOLOv6s',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-training_mode='qarepvggv2'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_hs.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_hs.py
deleted file mode 100644
index 70a74279c8d36f1b536c25abcd65c781da610fe9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_hs.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# YOLOv6t model
-model = dict(
- type='YOLOv6t',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.375,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='hyper_search'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_opt.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_opt.py
deleted file mode 100644
index 95dbf3178a5597305795505e0eba263802bace24..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_opt.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# YOLOv6t model
-model = dict(
- type='YOLOv6t',
- pretrained=None,
- scales='../yolov6_assert/v6t_v2_scale_last.pt',
- depth_multiple=0.33,
- width_multiple=0.375,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='repopt'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_opt_qat.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_opt_qat.py
deleted file mode 100644
index 701bf4f1d896458e64aca3dd969e5b5541e6ce17..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6_tiny_opt_qat.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# YOLOv6t model
-model = dict(
- type='YOLOv6t',
- pretrained='./assets/v6s_t.pt',
- scales='./assets/v6t_v2_scale_last.pt',
- depth_multiple=0.33,
- width_multiple=0.375,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False,
- reg_max=0, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.00001,
- lrf=0.001,
- momentum=0.937,
- weight_decay=0.00005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-ptq = dict(
- num_bits = 8,
- calib_batches = 4,
- # 'max', 'histogram'
- calib_method = 'max',
- # 'entropy', 'percentile', 'mse'
- histogram_amax_method='entropy',
- histogram_amax_percentile=99.99,
- calib_output_path='./',
- sensitive_layers_skip=False,
- sensitive_layers_list=[],
-)
-
-qat = dict(
- calib_pt = './assets/v6s_t_calib_max.pt',
- sensitive_layers_skip = False,
- sensitive_layers_list=[],
-)
-
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='repopt'
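The inline comments in the `ptq` dict above enumerate the legal choices: `calib_method` is `'max'` or `'histogram'`, and `histogram_amax_method` is `'entropy'`, `'percentile'`, or `'mse'`. The small helper below makes those constraints explicit; it is our own sanity-check sketch, not part of the repo.

```python
# Our own sanity-check sketch for the `ptq` dict format used by these configs;
# the valid choices come from the inline comments in the config above.
VALID_CALIB_METHODS = {"max", "histogram"}
VALID_AMAX_METHODS = {"entropy", "percentile", "mse"}

def validate_ptq(ptq: dict) -> None:
    if ptq["calib_method"] not in VALID_CALIB_METHODS:
        raise ValueError(f"calib_method must be one of {VALID_CALIB_METHODS}")
    # The histogram-specific knobs only matter when calib_method='histogram'.
    if (ptq["calib_method"] == "histogram"
            and ptq["histogram_amax_method"] not in VALID_AMAX_METHODS):
        raise ValueError(f"histogram_amax_method must be one of {VALID_AMAX_METHODS}")

validate_ptq(dict(num_bits=8, calib_batches=4, calib_method="max",
                  histogram_amax_method="entropy", histogram_amax_percentile=99.99,
                  calib_output_path="./", sensitive_layers_skip=False,
                  sensitive_layers_list=[]))
```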
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_hs.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_hs.py
deleted file mode 100644
index 67607ba2823a3dd1654d0591a82c1c5ee575987e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_hs.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.02, #0.01 # 0.02
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='hyper_search'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_opt.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_opt.py
deleted file mode 100644
index 9b3db4fbf52356bcb1262b91f710ea0c60b7ebe6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_opt.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n',
- pretrained=None,
- scales='../yolov6_assert/v6n_v2_scale_last.pt',
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.02, #0.01 # 0.02
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='repopt'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_opt_qat.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_opt_qat.py
deleted file mode 100644
index 4e76dfd3c41cd81351646f03f5467830125fd2c4..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6n_opt_qat.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n',
- pretrained='./assets/v6s_n.pt',
- scales='./assets/v6n_v2_scale_last.pt',
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False,
- reg_max=0, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.00001, #0.01 # 0.02
- lrf=0.001,
- momentum=0.937,
- weight_decay=0.00005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-ptq = dict(
- num_bits = 8,
- calib_batches = 4,
- # 'max', 'histogram'
- calib_method = 'max',
- # 'entropy', 'percentile', 'mse'
- histogram_amax_method='entropy',
- histogram_amax_percentile=99.99,
- calib_output_path='./',
- sensitive_layers_skip=False,
- sensitive_layers_list=[],
-)
-
-qat = dict(
- calib_pt = './assets/v6s_n_calib_max.pt',
- sensitive_layers_skip = False,
- sensitive_layers_list=[],
-)
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='repopt'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_hs.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_hs.py
deleted file mode 100644
index 60c7286a1b21ee791acdc88afecb44b2e05c6ba7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_hs.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# YOLOv6s model
-model = dict(
- type='YOLOv6s',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=False,
- reg_max=0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='hyper_search'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_opt.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_opt.py
deleted file mode 100644
index 2676eb4f147bbd32deeeb3e2fdf8659f2cb43bbe..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_opt.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# YOLOv6s model
-model = dict(
- type='YOLOv6s',
- pretrained=None,
- scales='../yolov6_assert/v6s_v2_scale.pt',
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=False,
- reg_max=0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='repopt'
diff --git a/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_opt_qat.py b/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_opt_qat.py
deleted file mode 100644
index a41ea085c863e3c3b460921f345e0b21e27234e5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/repopt/yolov6s_opt_qat.py
+++ /dev/null
@@ -1,113 +0,0 @@
-# YOLOv6s model
-model = dict(
- type='YOLOv6s',
- pretrained='./assets/yolov6s_v2_reopt_43.1.pt',
- scales='./assets/yolov6s_v2_scale.pt',
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- ),
- neck=dict(
- type='RepPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=1,
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type = 'giou',
- use_dfl = False,
- reg_max = 0, # if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.00001,
- lrf=0.001,
- momentum=0.937,
- weight_decay=0.00005,
- warmup_epochs=3,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
-
-ptq = dict(
- num_bits = 8,
- calib_batches = 4,
- # 'max', 'histogram'
- calib_method = 'histogram',
- # 'entropy', 'percentile', 'mse'
- histogram_amax_method='entropy',
- histogram_amax_percentile=99.99,
- calib_output_path='./',
- sensitive_layers_skip=False,
- sensitive_layers_list=['detect.stems.0.conv',
- 'detect.stems.1.conv',
- 'detect.stems.2.conv',
- 'detect.cls_convs.0.conv',
- 'detect.cls_convs.1.conv',
- 'detect.cls_convs.2.conv',
- 'detect.reg_convs.0.conv',
- 'detect.reg_convs.1.conv',
- 'detect.reg_convs.2.conv',
- 'detect.cls_preds.0',
- 'detect.cls_preds.1',
- 'detect.cls_preds.2',
- 'detect.reg_preds.0',
- 'detect.reg_preds.1',
- 'detect.reg_preds.2',
- ],
-)
-
-qat = dict(
- calib_pt = './assets/yolov6s_v2_reopt_43.1_calib_histogram.pt',
- sensitive_layers_skip = False,
- sensitive_layers_list=['detect.stems.0.conv',
- 'detect.stems.1.conv',
- 'detect.stems.2.conv',
- 'detect.cls_convs.0.conv',
- 'detect.cls_convs.1.conv',
- 'detect.cls_convs.2.conv',
- 'detect.reg_convs.0.conv',
- 'detect.reg_convs.1.conv',
- 'detect.reg_convs.2.conv',
- 'detect.cls_preds.0',
- 'detect.cls_preds.1',
- 'detect.cls_preds.2',
- 'detect.reg_preds.0',
- 'detect.reg_preds.1',
- 'detect.reg_preds.2',
- ],
-)
-
-# Choose the Rep-block by training mode; choices = ["repvgg", "hyper_search", "repopt"]
-training_mode='repopt'
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/README.md b/cv/detection/yolov6/pytorch/configs/yolov6_lite/README.md
deleted file mode 100644
index 170d12d9219db6bb7c5365e8b163d7e60a1734ea..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/README.md
+++ /dev/null
@@ -1,22 +0,0 @@
-## YOLOv6Lite model
-
-English | [简体中文](./README_cn.md)
-
-## Mobile Benchmark
-| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | sm8350<br/>(ms) | mt6853<br/>(ms) | sdm660<br/>(ms) | Params<br/>(M) | FLOPs<br/>(G) |
-| :----------------------------------------------------------- | ---- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- |
-| [**YOLOv6Lite-S**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_s.pt) | 320*320 | 22.4 | 7.99 | 11.99 | 41.86 | 0.55 | 0.56 |
-| [**YOLOv6Lite-M**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_m.pt) | 320*320 | 25.1 | 9.08 | 13.27 | 47.95 | 0.79 | 0.67 |
-| [**YOLOv6Lite-L**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_l.pt) | 320*320 | 28.0 | 11.37 | 16.20 | 61.40 | 1.09 | 0.87 |
-| [**YOLOv6Lite-L**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_l.pt) | 320*192 | 25.0 | 7.02 | 9.66 | 36.13 | 1.09 | 0.52 |
-| [**YOLOv6Lite-L**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_l.pt) | 224*128 | 18.9 | 3.63 | 4.99 | 17.76 | 1.09 | 0.24 |
-
-
-Table Notes
-
-- To cover both model size and input aspect ratio, we built a series of mobile-side models that support flexible applications in different scenarios.
-- All checkpoints are trained for 400 epochs without distillation.
-- mAP and speed results are evaluated on the [COCO val2017](https://cocodataset.org/#download) dataset, with input resolutions given in the Size column of the table.
-- Speed is tested with MNN 2.3.0 AArch64 using 2 threads and arm82 acceleration; inference is warmed up 10 times and then looped 100 times.
-- Qualcomm 888 (sm8350), Dimensity 720 (mt6853) and Qualcomm 660 (sdm660) correspond to high-, mid- and low-end chips respectively, and can serve as a reference for model capability across different chips.
-- Refer to the [Test NCNN Speed](./docs/Test_NCNN_speed.md) tutorial to reproduce the NCNN speed results of YOLOv6Lite.
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/README_cn.md b/cv/detection/yolov6/pytorch/configs/yolov6_lite/README_cn.md
deleted file mode 100644
index 23dd715e1387947139e109d95f0e2563b942683b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/README_cn.md
+++ /dev/null
@@ -1,23 +0,0 @@
-## YOLOv6 lite models
-
-Simplified Chinese | [English](./README.md)
-
-## Mobile Benchmark
-
-| Model | Input Size | mAP<sup>val</sup><br/>0.5:0.95 | sm8350<br/>(ms) | mt6853<br/>(ms) | sdm660<br/>(ms) | Params<br/>(M) | FLOPs<br/>(G) |
-| :----------------------------------------------------------- | ---- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- |
-| [**YOLOv6Lite-S**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_s.pt) | 320*320 | 22.4 | 7.99 | 11.99 | 41.86 | 0.55 | 0.56 |
-| [**YOLOv6Lite-M**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_m.pt) | 320*320 | 25.1 | 9.08 | 13.27 | 47.95 | 0.79 | 0.67 |
-| [**YOLOv6Lite-L**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_l.pt) | 320*320 | 28.0 | 11.37 | 16.20 | 61.40 | 1.09 | 0.87 |
-| [**YOLOv6Lite-L**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_l.pt) | 320*192 | 25.0 | 7.02 | 9.66 | 36.13 | 1.09 | 0.52 |
-| [**YOLOv6Lite-L**](https://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6lite_l.pt) | 224*128 | 18.9 | 3.63 | 4.99 | 17.76 | 1.09 | 0.24 |
-
-
-Table Notes
-
-- To cover both model size and input aspect ratio, we built a series of mobile-side models that support flexible applications in different scenarios.
-- All checkpoints are trained for 400 epochs without distillation.
-- mAP and speed results are evaluated on the COCO val2017 dataset, with input resolutions as shown in the table.
-- Speed is tested with MNN 2.3.0 AArch64 using 2 threads and arm82 acceleration; inference is warmed up 10 times and then looped 100 times.
-- Qualcomm 888 (sm8350), Dimensity 720 (mt6853) and Qualcomm 660 (sdm660) correspond to high-, mid- and low-end chips respectively, and can serve as a reference for model capability across different chips.
-- Refer to the [Test NCNN Speed](./docs/Test_NCNN_speed.md) tutorial to reproduce the NCNN speed results of YOLOv6Lite.
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_l.py b/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_l.py
deleted file mode 100644
index 212c8c73bc79c06c40ddf090289a2810fecb3562..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_l.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# YOLOv6-lite-l model
-model = dict(
- type='YOLOv6-lite-l',
- pretrained=None,
- width_multiple=1.5,
- backbone=dict(
- type='Lite_EffiBackbone',
- num_repeats=[1, 3, 7, 3],
- out_channels=[24, 32, 64, 128, 256],
- scale_size=0.5,
- ),
- neck=dict(
- type='Lite_EffiNeck',
- in_channels=[256, 128, 64],
- unified_channels=96
- ),
- head=dict(
- type='Lite_EffideHead',
- in_channels=[96, 96, 96, 96],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.1 * 4,
- lrf=0.01,
- momentum=0.9,
- weight_decay=0.00004,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
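Every `solver` dict in these configs pairs `lr0` (the initial learning rate) with `lrf` (the final learning-rate fraction) under `lr_scheduler='Cosine'`. The exact interpolation is not shown in this patch; YOLO-family trainers commonly implement it as the half-cosine below, so treat this as an assumed sketch (the 400-epoch horizon is only an example).

```python
# Assumed half-cosine schedule: LR decays from lr0 to lr0 * lrf over training.
import math

def cosine_lr(epoch: int, max_epochs: int, lr0: float, lrf: float) -> float:
    cos_factor = (1 - math.cos(epoch * math.pi / max_epochs)) / 2  # 0 -> 1
    return lr0 * ((1 - cos_factor) + cos_factor * lrf)

# With the yolov6-lite solver above (lr0 = 0.1 * 4, lrf = 0.01):
for epoch in (0, 200, 400):
    print(epoch, round(cosine_lr(epoch, 400, 0.1 * 4, 0.01), 4))
# 0 -> 0.4, 200 -> 0.202, 400 -> 0.004
```

The `0.1 * 4` initial rate reads like a base LR of 0.1 linearly scaled for multi-GPU training, though that rationale is our inference, not something stated in the config.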
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_l_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_l_finetune.py
deleted file mode 100644
index 6effa765e3cae611dfde03677ce859498157fa36..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_l_finetune.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# YOLOv6-lite-l model
-model = dict(
- type='YOLOv6-lite-l',
- pretrained='weights/yolov6_lite_l.pt',
- width_multiple=1.5,
- backbone=dict(
- type='Lite_EffiBackbone',
- num_repeats=[1, 3, 7, 3],
- out_channels=[24, 32, 64, 128, 256],
- scale_size=0.5,
- ),
- neck=dict(
- type='Lite_EffiNeck',
- in_channels=[256, 128, 64],
- unified_channels=96
- ),
- head=dict(
- type='Lite_EffideHead',
- in_channels=[96, 96, 96, 96],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_m.py b/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_m.py
deleted file mode 100644
index 8f0de368d20ba43cec17b5f131b9a9031fd7263b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_m.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# YOLOv6-lite-m model
-model = dict(
- type='YOLOv6-lite-m',
- pretrained=None,
- width_multiple=1.1,
- backbone=dict(
- type='Lite_EffiBackbone',
- num_repeats=[1, 3, 7, 3],
- out_channels=[24, 32, 64, 128, 256],
- scale_size=0.5,
- ),
- neck=dict(
- type='Lite_EffiNeck',
- in_channels=[256, 128, 64],
- unified_channels=96
- ),
- head=dict(
- type='Lite_EffideHead',
- in_channels=[96, 96, 96, 96],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.1 * 4,
- lrf=0.01,
- momentum=0.9,
- weight_decay=0.00004,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_m_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_m_finetune.py
deleted file mode 100644
index 09fcd5c5fb16079e10bbe86b64ca2cd0e2df16f3..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_m_finetune.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# YOLOv6-lite-m model
-model = dict(
- type='YOLOv6-lite-m',
- pretrained='weights/yolov6_lite_m.pt',
- width_multiple=1.1,
- backbone=dict(
- type='Lite_EffiBackbone',
- num_repeats=[1, 3, 7, 3],
- out_channels=[24, 32, 64, 128, 256],
- scale_size=0.5,
- ),
- neck=dict(
- type='Lite_EffiNeck',
- in_channels=[256, 128, 64],
- unified_channels=96
- ),
- head=dict(
- type='Lite_EffideHead',
- in_channels=[96, 96, 96, 96],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_s.py b/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_s.py
deleted file mode 100644
index 42a52e373b871ba33b877fd8878b732bd73efaa5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_s.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# YOLOv6-lite-s model
-model = dict(
- type='YOLOv6-lite-s',
- pretrained=None,
- width_multiple=0.7,
- backbone=dict(
- type='Lite_EffiBackbone',
- num_repeats=[1, 3, 7, 3],
- out_channels=[24, 32, 64, 128, 256],
- scale_size=0.5,
- ),
- neck=dict(
- type='Lite_EffiNeck',
- in_channels=[256, 128, 64],
- unified_channels=96
- ),
- head=dict(
- type='Lite_EffideHead',
- in_channels=[96, 96, 96, 96],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.1 * 4,
- lrf=0.01,
- momentum=0.9,
- weight_decay=0.00004,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_s_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_s_finetune.py
deleted file mode 100644
index 967e167664f014084ca67db4a480e28f1a35544b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6_lite/yolov6_lite_s_finetune.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# YOLOv6-lite-s model
-model = dict(
- type='YOLOv6-lite-s',
- pretrained='weights/yolov6_lite_s.pt',
- width_multiple=0.7,
- backbone=dict(
- type='Lite_EffiBackbone',
- num_repeats=[1, 3, 7, 3],
- out_channels=[24, 32, 64, 128, 256],
- scale_size=0.5,
- ),
- neck=dict(
- type='Lite_EffiNeck',
- in_channels=[256, 128, 64],
- unified_channels=96
- ),
- head=dict(
- type='Lite_EffideHead',
- in_channels=[96, 96, 96, 96],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6l.py b/cv/detection/yolov6/pytorch/configs/yolov6l.py
deleted file mode 100644
index bfa6728b523d6797022be59146d60d2c9da5b4fd..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6l.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# YOLOv6l model
-model = dict(
- type='YOLOv6l',
- pretrained=None,
- depth_multiple=1.0,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
-training_mode = "conv_silu"
-# "conv_silu" uses plain conv + SiLU blocks to speed up training and further improve accuracy.
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6l6.py b/cv/detection/yolov6/pytorch/configs/yolov6l6.py
deleted file mode 100644
index 3bb77c5f56bf97c1aae53d409359b49c5678789e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6l6.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# YOLOv6l6 model
-model = dict(
- type='YOLOv6l6',
- pretrained=None,
- depth_multiple=1.0,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone_P6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck_P6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.2,
-)
-training_mode = "conv_silu"
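The `_P6` backbone/neck variants above add a stride-64 output, so the head predicts on four feature levels instead of three. The arithmetic is simple enough to spell out; the 1280-pixel input below is only an example size.

```python
# Feature-grid sizes per stride for a square input (illustrative only).
def grid_sizes(img_size: int, strides=(8, 16, 32, 64)) -> dict:
    return {s: img_size // s for s in strides}

print(grid_sizes(1280))  # {8: 160, 16: 80, 32: 40, 64: 20}
```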
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6l6_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6l6_finetune.py
deleted file mode 100644
index 2ffb8ada8949b6316d1fb09eca533849df1d9c47..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6l6_finetune.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# YOLOv6l6 model
-model = dict(
- type='YOLOv6l6',
- pretrained='weights/yolov6l6.pt',
- depth_multiple=1.0,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone_P6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck_P6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-training_mode = "conv_silu"
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6l_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6l_finetune.py
deleted file mode 100644
index 9b3012338ea6d3d0d82198844561b095046dc4f5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6l_finetune.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# YOLOv6l model
-model = dict(
- type='YOLOv6l',
- pretrained='weights/yolov6l.pt',
- depth_multiple=1.0,
- width_multiple=1.0,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(1)/2,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(1)/2,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 2.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
-training_mode = "conv_silu"
-# "conv_silu" uses plain conv + SiLU blocks to speed up training and further improve accuracy.
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6m.py b/cv/detection/yolov6/pytorch/configs/yolov6m.py
deleted file mode 100644
index 29fae396ead511df95c0d54ac207e57e524a3ba7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6m.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# YOLOv6m model
-model = dict(
- type='YOLOv6m',
- pretrained=None,
- depth_multiple=0.60,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(2)/3,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(2)/3,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 0.8,
- 'dfl': 1.0,
- },
- )
-)
-
-solver=dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6m6.py b/cv/detection/yolov6/pytorch/configs/yolov6m6.py
deleted file mode 100644
index e741bbc03a873579ce414cd5e91950d96548b732..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6m6.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# YOLOv6m6 model
-model = dict(
- type='YOLOv6m6',
- pretrained=None,
- depth_multiple=0.60,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone_P6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
- csp_e=float(2)/3,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck_P6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- csp_e=float(2)/3,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.9,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.1,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6m6_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6m6_finetune.py
deleted file mode 100644
index 83760d3a1def97f5503da09e7f4df0514e51c3e9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6m6_finetune.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# YOLOv6m6 model
-model = dict(
- type='YOLOv6m6',
- pretrained='weights/yolov6m6.pt',
- depth_multiple=0.60,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone_P6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
- csp_e=float(2)/3,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck_P6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- csp_e=float(2)/3,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6m_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6m_finetune.py
deleted file mode 100644
index cfe0fa9358fe81580c5fcd4664b9a3dbe1cd544b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6m_finetune.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# YOLOv6m model
-model = dict(
- type='YOLOv6m',
- pretrained='weights/yolov6m.pt',
- depth_multiple=0.60,
- width_multiple=0.75,
- backbone=dict(
- type='CSPBepBackbone',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- csp_e=float(2)/3,
- fuse_P2=True,
- ),
- neck=dict(
- type='CSPRepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- csp_e=float(2)/3,
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=True,
- reg_max=16, #if use_dfl is False, please set reg_max to 0
- distill_weight={
- 'class': 0.8,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
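A reading aid for the `anchors_init` lists that recur in these configs: with `anchors=3`, each row presumably holds three (w, h) pairs for one stride level. The reshaping below is purely illustrative.

```python
# Split each anchors_init row into (w, h) pairs, one row per stride level.
anchors_init = [[10, 13, 19, 19, 33, 23],
                [30, 61, 59, 59, 59, 119],
                [116, 90, 185, 185, 373, 326]]
pairs = [[(row[i], row[i + 1]) for i in range(0, len(row), 2)] for row in anchors_init]
for stride, level in zip((8, 16, 32), pairs):
    print(f"stride {stride}: {level}")
```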
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6n.py b/cv/detection/yolov6/pytorch/configs/yolov6n.py
deleted file mode 100644
index 74f9386d791f9b62786125c761f6b1cd31c73ac5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6n.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.02,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6n6.py b/cv/detection/yolov6/pytorch/configs/yolov6n6.py
deleted file mode 100644
index 0abe3a44d5b3d33333b1d595ecc0b13518352ed6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6n6.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# YOLOv6n6 model
-model = dict(
- type='YOLOv6n6',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
- fuse_P2=True, # if use RepBiFPANNeck6, please set fuse_P2 to True.
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.02,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6n6_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6n6_finetune.py
deleted file mode 100644
index 01100f0f63a12269c509fd5820c2b9ba3c6fe258..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6n6_finetune.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# YOLOv6n6 model
-model = dict(
- type='YOLOv6n6',
- pretrained='weights/yolov6n6.pt',
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
- fuse_P2=True, # if use RepBiFPANNeck6, please set fuse_P2 to True.
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='siou',
- use_dfl=False,
- reg_max=0 #if use_dfl is False, please set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6n_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6n_finetune.py
deleted file mode 100644
index 03b6d1baaba363d88f39e8417beeb1336d956b6e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6n_finetune.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# YOLOv6n model
-model = dict(
- type='YOLOv6n',
- pretrained='weights/yolov6n.pt',
- depth_multiple=0.33,
- width_multiple=0.25,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='siou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6s.py b/cv/detection/yolov6/pytorch/configs/yolov6s.py
deleted file mode 100644
index 8d8b6739cd012e597fe1c5a2b6d104fedd2ab10a..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6s.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# YOLOv6s model
-model = dict(
- type='YOLOv6s',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6s6.py b/cv/detection/yolov6/pytorch/configs/yolov6s6.py
deleted file mode 100644
index 091bfffca5e835d384e4cd138d445bae5fad0f80..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6s6.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# YOLOv6s6 model
-model = dict(
- type='YOLOv6s6',
- pretrained=None,
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
-        fuse_P2=True, # if using RepBiFPANNeck6, set fuse_P2 to True.
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='giou',
- use_dfl=False,
-        reg_max=0 # if use_dfl is False, set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.01,
- lrf=0.01,
- momentum=0.937,
- weight_decay=0.0005,
- warmup_epochs=3.0,
- warmup_momentum=0.8,
- warmup_bias_lr=0.1
-)
-
-data_aug = dict(
- hsv_h=0.015,
- hsv_s=0.7,
- hsv_v=0.4,
- degrees=0.0,
- translate=0.1,
- scale=0.5,
- shear=0.0,
- flipud=0.0,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.0,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6s6_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6s6_finetune.py
deleted file mode 100644
index a22697ed384fd85ea808a88bb295d7735d649e9f..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6s6_finetune.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# YOLOv6s6 model
-model = dict(
- type='YOLOv6s6',
- pretrained='weights/yolov6s6.pt',
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep6',
- num_repeats=[1, 6, 12, 18, 6, 6],
- out_channels=[64, 128, 256, 512, 768, 1024],
-        fuse_P2=True, # if using RepBiFPANNeck6, set fuse_P2 to True.
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck6',
- num_repeats=[12, 12, 12, 12, 12, 12],
- out_channels=[512, 256, 128, 256, 512, 1024],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512, 1024],
- num_layers=4,
- anchors=1,
- strides=[8, 16, 32, 64],
- atss_warmup_epoch=4,
- iou_type='giou',
- use_dfl=False,
-        reg_max=0 # if use_dfl is False, set reg_max to 0
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/configs/yolov6s_finetune.py b/cv/detection/yolov6/pytorch/configs/yolov6s_finetune.py
deleted file mode 100644
index d6fb27fe8adc35f9a8a0307831365231eb6f83df..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/configs/yolov6s_finetune.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# YOLOv6s model
-model = dict(
- type='YOLOv6s',
- pretrained='weights/yolov6s.pt',
- depth_multiple=0.33,
- width_multiple=0.50,
- backbone=dict(
- type='EfficientRep',
- num_repeats=[1, 6, 12, 18, 6],
- out_channels=[64, 128, 256, 512, 1024],
- fuse_P2=True,
- cspsppf=True,
- ),
- neck=dict(
- type='RepBiFPANNeck',
- num_repeats=[12, 12, 12, 12],
- out_channels=[256, 128, 128, 256, 256, 512],
- ),
- head=dict(
- type='EffiDeHead',
- in_channels=[128, 256, 512],
- num_layers=3,
- begin_indices=24,
- anchors=3,
- anchors_init=[[10,13, 19,19, 33,23],
- [30,61, 59,59, 59,119],
- [116,90, 185,185, 373,326]],
- out_indices=[17, 20, 23],
- strides=[8, 16, 32],
- atss_warmup_epoch=0,
- iou_type='giou',
- use_dfl=False, # set to True if you want to further train with distillation
- reg_max=0, # set to 16 if you want to further train with distillation
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- },
- )
-)
-
-solver = dict(
- optim='SGD',
- lr_scheduler='Cosine',
- lr0=0.0032,
- lrf=0.12,
- momentum=0.843,
- weight_decay=0.00036,
- warmup_epochs=2.0,
- warmup_momentum=0.5,
- warmup_bias_lr=0.05
-)
-
-data_aug = dict(
- hsv_h=0.0138,
- hsv_s=0.664,
- hsv_v=0.464,
- degrees=0.373,
- translate=0.245,
- scale=0.898,
- shear=0.602,
- flipud=0.00856,
- fliplr=0.5,
- mosaic=1.0,
- mixup=0.243,
-)
diff --git a/cv/detection/yolov6/pytorch/data/coco.yaml b/cv/detection/yolov6/pytorch/data/coco.yaml
deleted file mode 100644
index ff88acbebc79bc13da69db385e58ca8cfd869b6d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/data/coco.yaml
+++ /dev/null
@@ -1,21 +0,0 @@
-# COCO 2017 dataset http://cocodataset.org
-train: ./coco/images/train2017 # 118287 images
-val: ./coco/images/val2017 # 5000 images
-test: ./coco/images/val2017
-anno_path: ./coco/annotations/instances_val2017.json
-
-# number of classes
-nc: 80
-# whether this is the COCO dataset; set to True only for COCO.
-is_coco: True
-
-# class names
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
- 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
- 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
- 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
- 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
- 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
- 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
- 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
- 'hair drier', 'toothbrush' ]
diff --git a/cv/detection/yolov6/pytorch/data/dataset.yaml b/cv/detection/yolov6/pytorch/data/dataset.yaml
deleted file mode 100644
index 6e02692159c7faab5b4e3d9140c99f4878005b64..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/data/dataset.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-# Please ensure that your custom_dataset is placed in the same parent dir as YOLOv6_DIR
-train: ../custom_dataset/images/train # train images
-val: ../custom_dataset/images/val # val images
-test: ../custom_dataset/images/test # test images (optional)
-
-# whether this is the COCO dataset; set to True only for COCO.
-is_coco: False
-# Classes
-nc: 20 # number of classes
-names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
- 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
diff --git a/cv/detection/yolov6/pytorch/data/voc.yaml b/cv/detection/yolov6/pytorch/data/voc.yaml
deleted file mode 100644
index d6aa6a622d97c2fc81e7779ef16387f7fd03b0f1..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/data/voc.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-# Please ensure that your custom_dataset is placed in the same parent dir as YOLOv6_DIR
-train: VOCdevkit/voc_07_12/images/train # train images
-val: VOCdevkit/voc_07_12/images/val # val images
-test: VOCdevkit/voc_07_12/images/val # test images (optional)
-
-# whether this is the COCO dataset; set to True only for COCO.
-is_coco: False
-# Classes
-nc: 20 # number of classes
-names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
- 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
diff --git a/cv/detection/yolov6/pytorch/hubconf.py b/cv/detection/yolov6/pytorch/hubconf.py
deleted file mode 100644
index 13ec92ab296bfb1d23bc48485345615194dca4a4..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/hubconf.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import os
-import cv2
-import math
-import pathlib
-import torch
-import numpy as np
-from PIL import Image
-import matplotlib.pyplot as plt
-
-from yolov6.layers.common import DetectBackend
-from yolov6.utils.nms import non_max_suppression
-from yolov6.data.data_augment import letterbox
-from yolov6.core.inferer import Inferer
-from yolov6.utils.events import LOGGER
-from yolov6.utils.events import load_yaml
-
-PATH_YOLOv6 = pathlib.Path(__file__).parent
-DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-CLASS_NAMES = load_yaml(str(PATH_YOLOv6/"data/coco.yaml"))['names']
-
-
-def visualize_detections(image,
- boxes,
- classes,
- scores,
- min_score=0.4,
- figsize=(16, 16),
- linewidth=2,
- color='lawngreen'
- ):
- image = np.array(image, dtype=np.uint8)
- fig = plt.figure(figsize=figsize)
- plt.axis("off")
- plt.imshow(image)
- ax = plt.gca()
- for box, name, score in zip(boxes, classes, scores):
- if score >= min_score:
- text = "{}: {:.2f}".format(name, score)
- x1, y1, x2, y2 = box
- w, h = x2 - x1, y2 - y1
- patch = plt.Rectangle(
- [x1, y1], w, h, fill=False, edgecolor=color, linewidth=linewidth
- )
- ax.add_patch(patch)
- ax.text(
- x1,
- y1,
- text,
- bbox={"facecolor": color, "alpha": 0.8},
- clip_box=ax.clipbox,
- clip_on=True,
- )
- plt.show()
-
-
-def check_img_size(img_size, s=32, floor=0):
- def make_divisible(x, divisor):
- return math.ceil(x / divisor) * divisor
- if isinstance(img_size, int): # integer i.e. img_size=640
- new_size = max(make_divisible(img_size, int(s)), floor)
- elif isinstance(img_size, list): # list i.e. img_size=[640, 480]
- new_size = [max(make_divisible(x, int(s)), floor) for x in img_size]
- else:
- raise Exception(f"Unsupported type of img_size: {type(img_size)}")
-
- if new_size != img_size:
- LOGGER.info(
- f'WARNING: --img-size {img_size} must be multiple of max stride {s}, updating to {new_size}')
- return new_size if isinstance(img_size, list) else [new_size] * 2
-
-
-def process_image(path, img_size, stride):
- '''Preprocess image before inference.'''
-    try:
-        img_src = cv2.imread(path)
-        assert img_src is not None, f"opencv cannot read image correctly or {path} does not exist"
-        img_src = cv2.cvtColor(img_src, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert to RGB
-    except Exception:
-        img_src = np.asarray(Image.open(path))
-        assert img_src is not None, f"Image Not Found {path}, workdir: {os.getcwd()}"
-
- image = letterbox(img_src, img_size, stride=stride)[0]
- image = image.transpose((2, 0, 1)) # HWC to CHW
- image = torch.from_numpy(np.ascontiguousarray(image))
- image = image.float()
- image /= 255
- return image, img_src
-
-
-class Detector(DetectBackend):
- def __init__(self,
- ckpt_path,
- class_names,
- device,
- img_size=640,
- conf_thres=0.25,
- iou_thres=0.45,
- max_det=1000):
- super().__init__(ckpt_path, device)
- self.class_names = class_names
- self.model.float()
- self.device = device
- self.img_size = check_img_size(img_size)
- self.conf_thres = conf_thres
- self.iou_thres = iou_thres
- self.max_det = max_det
-
- def forward(self, x, src_shape):
- pred_results = super().forward(x)
- classes = None # the classes to keep
- det = non_max_suppression(pred_results, self.conf_thres, self.iou_thres,
- classes, agnostic=False, max_det=self.max_det)[0]
-
- det[:, :4] = Inferer.rescale(
- x.shape[2:], det[:, :4], src_shape).round()
- boxes = det[:, :4]
- scores = det[:, 4]
- labels = det[:, 5].long()
- prediction = {'boxes': boxes, 'scores': scores, 'labels': labels}
- return prediction
-
- def predict(self, img_path):
- img, img_src = process_image(img_path, self.img_size, 32)
- img = img.to(self.device)
- if len(img.shape) == 3:
- img = img[None]
-
- prediction = self.forward(img, img_src.shape)
- out = {k: v.cpu().numpy() for k, v in prediction.items()}
- out['classes'] = [self.class_names[i] for i in out['labels']]
- return out
-
- def show_predict(self,
- img_path,
- min_score=0.5,
- figsize=(16, 16),
- color='lawngreen',
- linewidth=2):
- prediction = self.predict(img_path)
- boxes, scores, classes = prediction['boxes'], prediction['scores'], prediction['classes']
- visualize_detections(Image.open(img_path),
- boxes, classes, scores,
- min_score=min_score, figsize=figsize, color=color, linewidth=linewidth
- )
-
-
-def create_model(model_name, class_names=CLASS_NAMES, device=DEVICE,
- img_size=640, conf_thres=0.25, iou_thres=0.45, max_det=1000):
- if not os.path.exists(str(PATH_YOLOv6/'weights')):
- os.mkdir(str(PATH_YOLOv6/'weights'))
- if not os.path.exists(str(PATH_YOLOv6/'weights') + f'/{model_name}.pt'):
- torch.hub.load_state_dict_from_url(
- f"https://github.com/meituan/YOLOv6/releases/download/0.3.0/{model_name}.pt",
- str(PATH_YOLOv6/'weights'))
- return Detector(str(PATH_YOLOv6/'weights') + f'/{model_name}.pt',
- class_names, device, img_size=img_size, conf_thres=conf_thres,
- iou_thres=iou_thres, max_det=max_det)
-
-
-def yolov6n(class_names=CLASS_NAMES, device=DEVICE, img_size=640, conf_thres=0.25, iou_thres=0.45, max_det=1000):
- return create_model('yolov6n', class_names, device, img_size=img_size, conf_thres=conf_thres,
- iou_thres=iou_thres, max_det=max_det)
-
-
-def yolov6s(class_names=CLASS_NAMES, device=DEVICE, img_size=640, conf_thres=0.25, iou_thres=0.45, max_det=1000):
- return create_model('yolov6s', class_names, device, img_size=img_size, conf_thres=conf_thres,
- iou_thres=iou_thres, max_det=max_det)
-
-
-def yolov6m(class_names=CLASS_NAMES, device=DEVICE, img_size=640, conf_thres=0.25, iou_thres=0.45, max_det=1000):
- return create_model('yolov6m', class_names, device, img_size=img_size, conf_thres=conf_thres,
- iou_thres=iou_thres, max_det=max_det)
-
-
-def yolov6l(class_names=CLASS_NAMES, device=DEVICE, img_size=640, conf_thres=0.25, iou_thres=0.45, max_det=1000):
- return create_model('yolov6l', class_names, device, img_size=img_size, conf_thres=conf_thres,
- iou_thres=iou_thres, max_det=max_det)
-
-
-def custom(ckpt_path, class_names, device=DEVICE, img_size=640, conf_thres=0.25, iou_thres=0.45, max_det=1000):
- return Detector(ckpt_path, class_names, device, img_size=img_size, conf_thres=conf_thres,
- iou_thres=iou_thres, max_det=max_det)
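-
-
-# Illustrative usage of the hub entrypoints above (an assumed workflow, run with
-# this repo as the current working directory and an image path of your own):
-#
-#   import torch
-#   model = torch.hub.load('.', 'yolov6s', source='local')
-#   out = model.predict('path/to/image.jpg')       # dict with boxes/scores/classes
-#   model.show_predict('path/to/image.jpg')        # draw the detections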
diff --git a/cv/detection/yolov6/pytorch/requirements.txt b/cv/detection/yolov6/pytorch/requirements.txt
deleted file mode 100644
index 046e8145b5373635e199b861d243ddd19e97facc..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/requirements.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-# torch>=1.8.0
-# torchvision>=0.9.0
-numpy>=1.19.5
-opencv-python>=4.1.2
-PyYAML>=5.3.1
-scipy>=1.4.1
-tqdm>=4.41.0
-addict>=2.4.0
-tensorboard>=2.7.0
-pycocotools>=2.0
-# onnx>=1.10.0 # ONNX export
-# onnx-simplifier>=0.3.6 # ONNX simplifier
-thop # FLOPs computation
-# pytorch_quantization>=2.1.1
diff --git a/cv/detection/yolov6/pytorch/tools/eval.py b/cv/detection/yolov6/pytorch/tools/eval.py
deleted file mode 100644
index 5543029c1b142fd197279992e070c794c610ff4b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/eval.py
+++ /dev/null
@@ -1,169 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import argparse
-import os
-import os.path as osp
-import sys
-import torch
-
-ROOT = os.getcwd()
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT))
-
-from yolov6.core.evaler import Evaler
-from yolov6.utils.events import LOGGER
-from yolov6.utils.general import increment_name, check_img_size
-from yolov6.utils.config import Config
-
-def boolean_string(s):
- if s not in {'False', 'True'}:
- raise ValueError('Not a valid boolean string')
- return s == 'True'
-
-def get_args_parser(add_help=True):
-    parser = argparse.ArgumentParser(description='YOLOv6 PyTorch Evaluating', add_help=add_help)
- parser.add_argument('--data', type=str, default='./data/coco.yaml', help='dataset.yaml path')
- parser.add_argument('--weights', type=str, default='./weights/yolov6s.pt', help='model.pt path(s)')
- parser.add_argument('--batch-size', type=int, default=32, help='batch size')
- parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
- parser.add_argument('--conf-thres', type=float, default=0.03, help='confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.65, help='NMS IoU threshold')
- parser.add_argument('--task', default='val', help='val, test, or speed')
- parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--half', default=False, action='store_true', help='whether to use fp16 infer')
- parser.add_argument('--save_dir', type=str, default='runs/val/', help='evaluation save dir')
- parser.add_argument('--name', type=str, default='exp', help='save evaluation results to save_dir/name')
-    parser.add_argument('--shrink_size', type=int, default=0, help='shrink the loaded image size when testing')
- parser.add_argument('--infer_on_rect', default=True, type=boolean_string, help='default to run with rectangle image to boost speed.')
- parser.add_argument('--reproduce_640_eval', default=False, action='store_true', help='whether to reproduce 640 infer result, overwrite some config')
- parser.add_argument('--eval_config_file', type=str, default='./configs/experiment/eval_640_repro.py', help='config file for repro 640 infer result')
- parser.add_argument('--do_coco_metric', default=True, type=boolean_string, help='whether to use pycocotool to metric, set False to close')
-    parser.add_argument('--do_pr_metric', default=False, type=boolean_string, help='whether to calculate precision, recall and F1; set False to close')
- parser.add_argument('--plot_curve', default=True, type=boolean_string, help='whether to save plots in savedir when do pr metric, set False to close')
- parser.add_argument('--plot_confusion_matrix', default=False, action='store_true', help='whether to save confusion matrix plots when do pr metric, might cause no harm warning print')
- parser.add_argument('--verbose', default=False, action='store_true', help='whether to print metric on each class')
- parser.add_argument('--config-file', default='', type=str, help='experiments description file, lower priority than reproduce_640_eval')
- parser.add_argument('--specific-shape', action='store_true', help='rectangular training')
- parser.add_argument('--height', type=int, default=None, help='image height of model input')
- parser.add_argument('--width', type=int, default=None, help='image width of model input')
- args = parser.parse_args()
-
- if args.config_file:
-        assert os.path.exists(args.config_file), "Config file {} does not exist".format(args.config_file)
- cfg = Config.fromfile(args.config_file)
- if not hasattr(cfg, 'eval_params'):
-            LOGGER.info("Config file doesn't have an eval_params config.")
- else:
- eval_params=cfg.eval_params
- for key, value in eval_params.items():
- if key not in args.__dict__:
- LOGGER.info(f"Unrecognized config {key}, continue")
- continue
- if isinstance(value, list):
- if value[1] is not None:
- args.__dict__[key] = value[1]
- else:
- if value is not None:
- args.__dict__[key] = value
-
- # load params for reproduce 640 eval result
- if args.reproduce_640_eval:
-        assert os.path.exists(args.eval_config_file), "Reproduce config file {} does not exist".format(args.eval_config_file)
- eval_params = Config.fromfile(args.eval_config_file).eval_params
- eval_model_name = os.path.splitext(os.path.basename(args.weights))[0]
- if eval_model_name not in eval_params:
- eval_model_name = "default"
- args.shrink_size = eval_params[eval_model_name]["shrink_size"]
- args.infer_on_rect = eval_params[eval_model_name]["infer_on_rect"]
- #force params
- #args.img_size = 640
- args.conf_thres = 0.03
- args.iou_thres = 0.65
- args.task = "val"
- args.do_coco_metric = True
-
- LOGGER.info(args)
- return args
-
-
-@torch.no_grad()
-def run(data,
- weights=None,
- batch_size=32,
- img_size=640,
- conf_thres=0.03,
- iou_thres=0.65,
- task='val',
- device='',
- half=False,
- model=None,
- dataloader=None,
- save_dir='',
- name = '',
- shrink_size=640,
- letterbox_return_int=False,
- infer_on_rect=False,
- reproduce_640_eval=False,
- eval_config_file='./configs/experiment/eval_640_repro.py',
- verbose=False,
- do_coco_metric=True,
- do_pr_metric=False,
- plot_curve=False,
- plot_confusion_matrix=False,
- config_file=None,
- specific_shape=False,
- height=640,
- width=640
- ):
- """ Run the evaluation process
-
-    This function is the main process of evaluation, supporting image files and dirs containing images.
-    It supports the tasks 'val', 'train' and 'speed'. Task 'train' runs evaluation during the training
-    phase, task 'val' purely evaluates model.pt and returns its mAP, and task 'speed' evaluates the
-    inference speed of model.pt.
-
- """
-
- # task
- Evaler.check_task(task)
- if task == 'train':
- save_dir = save_dir
- else:
- save_dir = str(increment_name(osp.join(save_dir, name)))
- os.makedirs(save_dir, exist_ok=True)
-
- # check the threshold value, reload device/half/data according task
- Evaler.check_thres(conf_thres, iou_thres, task)
- device = Evaler.reload_device(device, model, task)
- half = device.type != 'cpu' and half
- data = Evaler.reload_dataset(data, task) if isinstance(data, str) else data
-
-    # verify the image size is a multiple of the grid stride
- if specific_shape:
- height = check_img_size(height, 32, floor=256)
- width = check_img_size(width, 32, floor=256)
- else:
- img_size = check_img_size(img_size, 32, floor=256)
- val = Evaler(data, batch_size, img_size, conf_thres, \
- iou_thres, device, half, save_dir, \
- shrink_size, infer_on_rect,
- verbose, do_coco_metric, do_pr_metric,
- plot_curve, plot_confusion_matrix,
- specific_shape=specific_shape,height=height, width=width)
- model = val.init_model(model, weights, task)
- dataloader = val.init_data(dataloader, task)
-
- # eval
- model.eval()
- pred_result, vis_outputs, vis_paths = val.predict_model(model, dataloader, task)
- eval_result = val.eval_model(pred_result, model, dataloader, task)
- return eval_result, vis_outputs, vis_paths
-
-
-def main(args):
- run(**vars(args))
-
-
-if __name__ == "__main__":
- args = get_args_parser()
- main(args)
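-
-
-# Illustrative invocation (paths follow the argparse defaults above; adjust to
-# your own checkpoint and dataset locations):
-#   python3 tools/eval.py --data ./data/coco.yaml --weights ./weights/yolov6s.pt --task val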
diff --git a/cv/detection/yolov6/pytorch/tools/infer.py b/cv/detection/yolov6/pytorch/tools/infer.py
deleted file mode 100644
index 95b3fdc7f5baea0c62f2f9b3e287884ab1c59918..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/infer.py
+++ /dev/null
@@ -1,120 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import argparse
-import os
-import sys
-import os.path as osp
-
-import torch
-
-ROOT = os.getcwd()
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT))
-
-from yolov6.utils.events import LOGGER
-from yolov6.core.inferer import Inferer
-
-
-def get_args_parser(add_help=True):
- parser = argparse.ArgumentParser(description='YOLOv6 PyTorch Inference.', add_help=add_help)
- parser.add_argument('--weights', type=str, default='weights/yolov6s.pt', help='model path(s) for inference.')
- parser.add_argument('--source', type=str, default='data/images', help='the source path, e.g. image-file/dir.')
- parser.add_argument('--webcam', action='store_true', help='whether to use webcam.')
- parser.add_argument('--webcam-addr', type=str, default='0', help='the web camera address, local camera or rtsp address.')
- parser.add_argument('--yaml', type=str, default='data/coco.yaml', help='data yaml file.')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='the image-size(h,w) in inference size.')
- parser.add_argument('--conf-thres', type=float, default=0.4, help='confidence threshold for inference.')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold for inference.')
- parser.add_argument('--max-det', type=int, default=1000, help='maximal inferences per image.')
- parser.add_argument('--device', default='0', help='device to run our model i.e. 0 or 0,1,2,3 or cpu.')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt.')
-    parser.add_argument('--not-save-img', action='store_true', help='do not save visualized inference results.')
- parser.add_argument('--save-dir', type=str, help='directory to save predictions in. See --save-txt.')
- parser.add_argument('--view-img', action='store_true', help='show inference results')
- parser.add_argument('--classes', nargs='+', type=int, help='filter by classes, e.g. --classes 0, or --classes 0 2 3.')
- parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS.')
- parser.add_argument('--project', default='runs/inference', help='save inference results to project/name.')
- parser.add_argument('--name', default='exp', help='save inference results to project/name.')
- parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels.')
- parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences.')
- parser.add_argument('--half', action='store_true', help='whether to use FP16 half-precision inference.')
-
- args = parser.parse_args()
- LOGGER.info(args)
- return args
-
-
-@torch.no_grad()
-def run(weights=osp.join(ROOT, 'yolov6s.pt'),
- source=osp.join(ROOT, 'data/images'),
- webcam=False,
- webcam_addr=0,
- yaml=None,
- img_size=640,
- conf_thres=0.4,
- iou_thres=0.45,
- max_det=1000,
- device='',
- save_txt=False,
- not_save_img=False,
- save_dir=None,
- view_img=True,
- classes=None,
- agnostic_nms=False,
- project=osp.join(ROOT, 'runs/inference'),
- name='exp',
- hide_labels=False,
- hide_conf=False,
- half=False,
- ):
-    """ Inference process, supporting inference on a single image file or a directory containing images.
-    Args:
-        weights: The path of model.pt, e.g. yolov6s.pt
-        source: Source path, supporting image files or dirs containing images.
-        webcam: Whether to read from a webcam instead of files.
-        webcam_addr: The webcam address, a local camera index or an rtsp address.
-        yaml: Data yaml file.
-        img_size: Inference image-size, e.g. 640
-        conf_thres: Confidence threshold in inference, e.g. 0.25
-        iou_thres: NMS IOU threshold in inference, e.g. 0.45
-        max_det: Maximal detections per image, e.g. 1000
-        device: Cuda device, e.g. 0, or 0,1,2,3 or cpu
-        save_txt: Save results to *.txt
-        not_save_img: Do not save visualized inference results
-        save_dir: Directory to save predictions in; defaults to project/name.
-        view_img: Whether to display inference results, e.g. False
-        classes: Filter by class: --class 0, or --class 0 2 3
-        agnostic_nms: Class-agnostic NMS
-        project: Save results to project/name
-        name: Save results to project/name, e.g. 'exp'
-        hide_labels: Hide labels, e.g. False
-        hide_conf: Hide confidences
-        half: Use FP16 half-precision inference, e.g. False
- """
- # create save dir
- if save_dir is None:
- save_dir = osp.join(project, name)
- save_txt_path = osp.join(save_dir, 'labels')
- else:
- save_txt_path = save_dir
- if (not not_save_img or save_txt) and not osp.exists(save_dir):
- os.makedirs(save_dir)
- else:
-        LOGGER.warning('Save directory already exists')
- if save_txt:
- save_txt_path = osp.join(save_dir, 'labels')
- if not osp.exists(save_txt_path):
- os.makedirs(save_txt_path)
-
- # Inference
- inferer = Inferer(source, webcam, webcam_addr, weights, device, yaml, img_size, half)
- inferer.infer(conf_thres, iou_thres, classes, agnostic_nms, max_det, save_dir, save_txt, not not_save_img, hide_labels, hide_conf, view_img)
-
- if save_txt or not not_save_img:
- LOGGER.info(f"Results saved to {save_dir}")
-
-
-def main(args):
- run(**vars(args))
-
-
-if __name__ == "__main__":
- args = get_args_parser()
- main(args)
diff --git a/cv/detection/yolov6/pytorch/tools/partial_quantization/README.md b/cv/detection/yolov6/pytorch/tools/partial_quantization/README.md
deleted file mode 100644
index 3a15a39dd87c804353e1cc1e748471b2a1cb1330..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/partial_quantization/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Partial Quantization
-The performance of YOLOv6s heavily degrades from 42.4% to 35.6% after traditional PTQ, which is unacceptable. To resolve this issue, we propose **partial quantization**. First we analyze the quantization sensitivity of all layers, and then keep the most sensitive layers in full precision as a compromise.
-
-With partial quantization, we finally reach 42.1% mAP, only a 0.3% loss in accuracy, while the throughput of the partially quantized model is about 1.56 times that of the FP16 model at a batch size of 32. This method achieves a good tradeoff between accuracy and throughput.
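-
-The selection logic itself is small. The sketch below is distilled from `partial_quant.py` and `utils.py` in this directory; it shows how the sensitivity ranking produced in the next section is turned into the set of layers that stay quantized:
-
-```python
-from tools.partial_quantization.utils import quant_sensitivity_load
-
-def select_quantable_ops(sensitivity_file, boundary):
-    # each entry is (layer_name, mAP@0.5, mAP@0.5:0.95): the accuracy retained
-    # when only that layer is quantized; higher retained mAP = less sensitive
-    quant_sensitivity = quant_sensitivity_load(sensitivity_file)
-    quant_sensitivity.sort(key=lambda tup: tup[2], reverse=True)
-    # quantize the (boundary + 1) least sensitive layers; the rest stay FP
-    return [name for name, _, _ in quant_sensitivity[:boundary + 1]]
-```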
-
-## Prerequisites
-```bash
-pip install --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com nvidia-pyindex
-pip install --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com pytorch_quantization
-```
-## Sensitivity analysis
-
-Please use the following command to perform sensitivity analysis. Since we randomly sample 128 images from the training dataset each time, the resulting sensitivity files will differ slightly between runs.
-
-```bash
- python3 sensitivity_analyse.py --weights yolov6s_reopt.pt \
- --batch-size 32 \
- --batch-number 4 \
- --data-root train_data_path
-```
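-
-The analysis writes one line per quantizable layer in the form `<layer_name> <mAP@0.5> <mAP@0.5:0.95>` (the accuracy obtained when only that layer is quantized), which is the format `partial_quant.py` reads back.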
-
-## Partial quantization
-
-With the sensitivity file at hand, we then proceed with partial quantization as follows.
-
-```bash
-python3 partial_quant.py --weights yolov6s_reopt.pt \
-                       --calib-weights yolov6s_reopt_calib.pt \
-                       --sensitivity-file yolov6s_reopt_quant_sensitivity_128_calib.txt \
- --quant-boundary 55 \
- --export-batch-size 1
-```
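-
-Here `--quant-boundary 55` keeps the 56 least sensitive layers quantized, ranked by the mAP they retain in the sensitivity file; all remaining layers fall back to full precision.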
-
-## Deployment
-
-Build a TensorRT engine:
-
-```bash
-trtexec --workspace=1024 --percentile=99 --streams=1 --int8 --fp16 --avgRuns=10 --onnx=yolov6s_reopt_partial_bs1.sim.onnx --saveEngine=yolov6s_reopt_partial_bs1.sim.trt
-```
-
-## Performance
-| Model | Size | Precision | mAP val 0.5:0.95 | Speed T4 trt b1 (fps) | Speed T4 trt b32 (fps) |
-| :-------------- | ----- | --------- | :--------------- | --------------------- | ---------------------- |
-| [**YOLOv6-s-partial**] [bs1](https://github.com/lippman1125/YOLOv6/releases/download/0.1.0/yolov6s_reopt_partial_bs1.sim.onnx) [bs32](https://github.com/lippman1125/YOLOv6/releases/download/0.1.0/yolov6s_reopt_partial_bs32.sim.onnx) | 640 | INT8 | 42.1 | 503 | 811 |
-| [**YOLOv6-s**] | 640 | FP16 | 42.4 | 373 | 520 |
diff --git a/cv/detection/yolov6/pytorch/tools/partial_quantization/eval.py b/cv/detection/yolov6/pytorch/tools/partial_quantization/eval.py
deleted file mode 100644
index 8213b945825b2880541a2887fc6dd5b9b078dbb5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/partial_quantization/eval.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os
-import torch
-from yolov6.core.evaler import Evaler
-
-class EvalerWrapper(object):
- def __init__(self, eval_cfg):
- task = eval_cfg['task']
- save_dir = eval_cfg['save_dir']
- half = eval_cfg['half']
- data = eval_cfg['data']
- batch_size = eval_cfg['batch_size']
- img_size = eval_cfg['img_size']
- device = eval_cfg['device']
- dataloader = None
-
- Evaler.check_task(task)
- if not os.path.exists(save_dir):
- os.makedirs(save_dir)
-
- # reload thres/device/half/data according task
- conf_thres = 0.03
- iou_thres = 0.65
- device = Evaler.reload_device(device, None, task)
- data = Evaler.reload_dataset(data) if isinstance(data, str) else data
-
- # init
- val = Evaler(data, batch_size, img_size, conf_thres, \
- iou_thres, device, half, save_dir)
- val.stride = eval_cfg['stride']
- dataloader = val.init_data(dataloader, task)
-
- self.eval_cfg = eval_cfg
- self.half = half
- self.device = device
- self.task = task
- self.val = val
- self.val_loader = dataloader
-
- def eval(self, model):
- model.eval()
- model.to(self.device)
- if self.half is True:
- model.half()
-
- with torch.no_grad():
- pred_result, vis_outputs, vis_paths = self.val.predict_model(model, self.val_loader, self.task)
- eval_result = self.val.eval_model(pred_result, model, self.val_loader, self.task)
-
- return eval_result
diff --git a/cv/detection/yolov6/pytorch/tools/partial_quantization/eval.yaml b/cv/detection/yolov6/pytorch/tools/partial_quantization/eval.yaml
deleted file mode 100644
index 3296e8ac504934a780234cc9295465517ae47014..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/partial_quantization/eval.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-task: 'val'
-save_dir: 'runs/val/exp'
-half: False
-data: '../../data/coco.yaml'
-batch_size: 32
-img_size: 640
-device: '0'
-stride: 32
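-
-# Loaded by EvalerWrapper in tools/partial_quantization/eval.py; the keys above
-# map one-to-one onto its eval_cfg fields.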
diff --git a/cv/detection/yolov6/pytorch/tools/partial_quantization/partial_quant.py b/cv/detection/yolov6/pytorch/tools/partial_quantization/partial_quant.py
deleted file mode 100644
index 6ca59560792351677b61743bef7a5c1945e10a83..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/partial_quantization/partial_quant.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import argparse
-import time
-import sys
-import os
-
-ROOT = os.getcwd()
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT))
-
-sys.path.append('../../')
-
-from yolov6.models.effidehead import Detect
-from yolov6.layers.common import *
-from yolov6.utils.events import LOGGER, load_yaml
-from yolov6.utils.checkpoint import load_checkpoint
-
-from tools.partial_quantization.eval import EvalerWrapper
-from tools.partial_quantization.utils import get_module, concat_quant_amax_fuse, quant_sensitivity_load
-from tools.partial_quantization.ptq import load_ptq, partial_quant
-
-from pytorch_quantization import nn as quant_nn
-
-# concat_fusion_list = [
-# ('backbone.ERBlock_5.2.m', 'backbone.ERBlock_5.2.cv2.conv'),
-# ('backbone.ERBlock_5.0.rbr_reparam', 'neck.Rep_p4.conv1.rbr_reparam'),
-# ('backbone.ERBlock_4.0.rbr_reparam', 'neck.Rep_p3.conv1.rbr_reparam'),
-# ('neck.upsample1.upsample_transpose', 'neck.Rep_n3.conv1.rbr_reparam'),
-# ('neck.upsample0.upsample_transpose', 'neck.Rep_n4.conv1.rbr_reparam')
-# ]
-
-op_concat_fusion_list = [
- ('backbone.ERBlock_5.2.m', 'backbone.ERBlock_5.2.cv2.conv'),
- ('backbone.ERBlock_5.0.conv', 'neck.Rep_p4.conv1.conv', 'neck.upsample_feat0_quant'),
- ('backbone.ERBlock_4.0.conv', 'neck.Rep_p3.conv1.conv', 'neck.upsample_feat1_quant'),
- ('neck.upsample1.upsample_transpose', 'neck.Rep_n3.conv1.conv'),
- ('neck.upsample0.upsample_transpose', 'neck.Rep_n4.conv1.conv'),
- #
- ('detect.reg_convs.0.conv', 'detect.cls_convs.0.conv'),
- ('detect.reg_convs.1.conv', 'detect.cls_convs.1.conv'),
- ('detect.reg_convs.2.conv', 'detect.cls_convs.2.conv'),
-]
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='./yolov6s_reopt.pt', help='weights path')
- parser.add_argument('--calib-weights', type=str, default='./yolov6s_reopt_calib.pt', help='calib weights path')
- parser.add_argument('--data-root', type=str, default=None, help='train data path')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size') # height, width
- parser.add_argument('--conf', type=str, default='../../configs/repopt/yolov6s_opt_qat.py', help='model config')
- parser.add_argument('--export-batch-size', type=int, default=None, help='export batch size')
- parser.add_argument('--inplace', action='store_true', help='set Detect() inplace=True')
- parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0, 1, 2, 3 or cpu')
- parser.add_argument('--sensitivity-file', type=str, default=None, help='quantization sensitivity file')
- parser.add_argument('--quant-boundary', type=int, default=None, help='quantization boundary')
- parser.add_argument('--eval-yaml', type=str, default='./eval.yaml', help='evaluation config')
- args = parser.parse_args()
- args.img_size *= 2 if len(args.img_size) == 1 else 1 # expand
- print(args)
- t = time.time()
-
- # Check device
- cuda = args.device != 'cpu' and torch.cuda.is_available()
- device = torch.device('cuda:0' if cuda else 'cpu')
-    # note: this script defines no --half flag, so guard the attribute lookup
-    assert not (device.type == 'cpu' and getattr(args, 'half', False)), '--half only compatible with GPU export, i.e. use --device 0'
- # Load PyTorch model
- model = load_checkpoint(args.weights, map_location=device, inplace=True, fuse=True) # load FP32 model
- model.eval()
- yolov6_evaler = EvalerWrapper(eval_cfg=load_yaml(args.eval_yaml))
- orig_mAP = yolov6_evaler.eval(model)
-
- for layer in model.modules():
- if isinstance(layer, RepVGGBlock):
- layer.switch_to_deploy()
-
- for k, m in model.named_modules():
- if isinstance(m, Conv): # assign export-friendly activations
- if isinstance(m.act, nn.SiLU):
- m.act = SiLU()
- elif isinstance(m, Detect):
- m.inplace = args.inplace
-
- model_ptq = load_ptq(model, args.calib_weights, device)
-
- quant_sensitivity = quant_sensitivity_load(args.sensitivity_file)
- quant_sensitivity.sort(key=lambda tup: tup[2], reverse=True)
- boundary = args.quant_boundary
- quantable_ops = [qops[0] for qops in quant_sensitivity[:boundary+1]]
- # only quantize ops in quantable_ops list
- partial_quant(model_ptq, quantable_ops=quantable_ops)
- # concat amax fusion
-    for sub_fusion_list in op_concat_fusion_list:
- ops = [get_module(model_ptq, op_name) for op_name in sub_fusion_list]
- concat_quant_amax_fuse(ops)
-
- part_mAP = yolov6_evaler.eval(model_ptq)
- print(part_mAP)
- # ONNX export
- quant_nn.TensorQuantizer.use_fb_fake_quant = True
- if args.export_batch_size is None:
- img = torch.zeros(1, 3, *args.img_size).to(device)
- export_file = args.weights.replace('.pt', '_partial_dynamic.onnx') # filename
- dynamic_axes = {"image_arrays": {0: "batch"}, "outputs": {0: "batch"}}
- torch.onnx.export(model_ptq,
- img,
- export_file,
- verbose=False,
- opset_version=13,
- training=torch.onnx.TrainingMode.EVAL,
- do_constant_folding=True,
- input_names=['image_arrays'],
- output_names=['outputs'],
- dynamic_axes=dynamic_axes
- )
- else:
- img = torch.zeros(args.export_batch_size, 3, *args.img_size).to(device)
- export_file = args.weights.replace('.pt', '_partial_bs{}.onnx'.format(args.export_batch_size)) # filename
- torch.onnx.export(model_ptq,
- img,
- export_file,
- verbose=False,
- opset_version=13,
- training=torch.onnx.TrainingMode.EVAL,
- do_constant_folding=True,
- input_names=['image_arrays'],
- output_names=['outputs']
- )
diff --git a/cv/detection/yolov6/pytorch/tools/partial_quantization/ptq.py b/cv/detection/yolov6/pytorch/tools/partial_quantization/ptq.py
deleted file mode 100644
index 6895a36ed0db779859d82fad27949c9a023ba80f..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/partial_quantization/ptq.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import torch
-import torch.nn as nn
-import copy
-
-from pytorch_quantization import nn as quant_nn
-from pytorch_quantization import tensor_quant
-from pytorch_quantization import calib
-from pytorch_quantization.tensor_quant import QuantDescriptor
-
-from tools.partial_quantization.utils import set_module, module_quant_disable
-
-def collect_stats(model, data_loader, batch_number, device='cuda'):
- """Feed data to the network and collect statistic"""
-
- # Enable calibrators
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- if module._calibrator is not None:
- module.disable_quant()
- module.enable_calib()
- else:
- module.disable()
-
- for i, data_tuple in enumerate(data_loader):
- image = data_tuple[0]
- image = image.float()/255.0
- model(image.to(device))
- if i + 1 >= batch_number:
- break
-
- # Disable calibrators
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- if module._calibrator is not None:
- module.enable_quant()
- module.disable_calib()
- else:
- module.enable()
-
-
-def compute_amax(model, **kwargs):
- # Load calib result
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- print(F"{name:40}: {module}")
- if module._calibrator is not None:
- if isinstance(module._calibrator, calib.MaxCalibrator):
- module.load_calib_amax()
- else:
- module.load_calib_amax(**kwargs)
-
-
-def quantable_op_check(k, quantable_ops):
-    # quantable_ops is None means "quantize every op"
-    return quantable_ops is None or k in quantable_ops
-
-
-def quant_model_init(model, device):
-
- model_ptq = copy.deepcopy(model)
- model_ptq.eval()
- model_ptq.to(device)
- conv2d_weight_default_desc = tensor_quant.QUANT_DESC_8BIT_CONV2D_WEIGHT_PER_CHANNEL
- conv2d_input_default_desc = QuantDescriptor(num_bits=8, calib_method='histogram')
-
- convtrans2d_weight_default_desc = tensor_quant.QUANT_DESC_8BIT_CONVTRANSPOSE2D_WEIGHT_PER_CHANNEL
- convtrans2d_input_default_desc = QuantDescriptor(num_bits=8, calib_method='histogram')
-
- for k, m in model_ptq.named_modules():
- if 'proj_conv' in k:
- print("Skip Layer {}".format(k))
- continue
-
- if isinstance(m, nn.Conv2d):
- in_channels = m.in_channels
- out_channels = m.out_channels
- kernel_size = m.kernel_size
- stride = m.stride
- padding = m.padding
- quant_conv = quant_nn.QuantConv2d(in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- quant_desc_input = conv2d_input_default_desc,
- quant_desc_weight = conv2d_weight_default_desc)
- quant_conv.weight.data.copy_(m.weight.detach())
- if m.bias is not None:
- quant_conv.bias.data.copy_(m.bias.detach())
- else:
- quant_conv.bias = None
- set_module(model_ptq, k, quant_conv)
- elif isinstance(m, nn.ConvTranspose2d):
- in_channels = m.in_channels
- out_channels = m.out_channels
- kernel_size = m.kernel_size
- stride = m.stride
- padding = m.padding
- quant_convtrans = quant_nn.QuantConvTranspose2d(in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- quant_desc_input = convtrans2d_input_default_desc,
- quant_desc_weight = convtrans2d_weight_default_desc)
- quant_convtrans.weight.data.copy_(m.weight.detach())
- if m.bias is not None:
- quant_convtrans.bias.data.copy_(m.bias.detach())
- else:
- quant_convtrans.bias = None
- set_module(model_ptq, k, quant_convtrans)
- elif isinstance(m, nn.MaxPool2d):
- kernel_size = m.kernel_size
- stride = m.stride
- padding = m.padding
- dilation = m.dilation
- ceil_mode = m.ceil_mode
- quant_maxpool2d = quant_nn.QuantMaxPool2d(kernel_size,
- stride,
- padding,
- dilation,
- ceil_mode,
- quant_desc_input = conv2d_input_default_desc)
- set_module(model_ptq, k, quant_maxpool2d)
- else:
- # module can not be quantized, continue
- continue
-
- return model_ptq.to(device)
-
-
-def do_ptq(model, train_loader, batch_number, device):
- model_ptq = quant_model_init(model, device)
- # It is a bit slow since we collect histograms on CPU
- with torch.no_grad():
- collect_stats(model_ptq, train_loader, batch_number, device)
- compute_amax(model_ptq, method='entropy')
- return model_ptq
-
-
-def load_ptq(model, calib_path, device):
- model_ptq = quant_model_init(model, device)
- model_ptq.load_state_dict(torch.load(calib_path)['model'].state_dict())
- return model_ptq
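-
-# Typical flow (see sensitivity_analyse.py): calibrate once with do_ptq() and
-# save the checkpoint, then reload it cheaply on later runs via load_ptq().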
-
-
-def partial_quant(model_ptq, quantable_ops=None):
- # ops not in quantable_ops will reserve full-precision.
- for k, m in model_ptq.named_modules():
- if quantable_op_check(k, quantable_ops):
- continue
- # enable full-precision
- if isinstance(m, quant_nn.QuantConv2d) or \
- isinstance(m, quant_nn.QuantConvTranspose2d) or \
- isinstance(m, quant_nn.QuantMaxPool2d):
- module_quant_disable(model_ptq, k)
diff --git a/cv/detection/yolov6/pytorch/tools/partial_quantization/sensitivity_analyse.py b/cv/detection/yolov6/pytorch/tools/partial_quantization/sensitivity_analyse.py
deleted file mode 100644
index bcf1fb09ac4b71f82a5eb87414516c8855fd77ac..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/partial_quantization/sensitivity_analyse.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import argparse
-import time
-import sys
-import os
-
-ROOT = os.getcwd()
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT))
-
-sys.path.append('../../')
-
-from yolov6.models.effidehead import Detect
-from yolov6.layers.common import *
-from yolov6.utils.events import LOGGER, load_yaml
-from yolov6.utils.checkpoint import load_checkpoint
-from yolov6.data.data_load import create_dataloader
-from yolov6.utils.config import Config
-
-from tools.partial_quantization.eval import EvalerWrapper
-from tools.partial_quantization.utils import module_quant_enable, module_quant_disable, model_quant_disable
-from tools.partial_quantization.utils import quant_sensitivity_save, quant_sensitivity_load
-from tools.partial_quantization.ptq import do_ptq, load_ptq
-
-from pytorch_quantization import nn as quant_nn
-
-
-def quant_sensitivity_analyse(model_ptq, evaler):
- # disable all quantable layer
- model_quant_disable(model_ptq)
-
- # analyse each quantable layer
- quant_sensitivity = list()
- for k, m in model_ptq.named_modules():
- if isinstance(m, quant_nn.QuantConv2d) or \
- isinstance(m, quant_nn.QuantConvTranspose2d) or \
-            isinstance(m, quant_nn.QuantMaxPool2d):
- module_quant_enable(model_ptq, k)
- else:
- # module can not be quantized, continue
- continue
-
- eval_result = evaler.eval(model_ptq)
- print(eval_result)
- print("Quantize Layer {}, result mAP0.5 = {:0.4f}, mAP0.5:0.95 = {:0.4f}".format(k,
- eval_result[0],
- eval_result[1]))
- quant_sensitivity.append((k, eval_result[0], eval_result[1]))
-        # disable this module's quantization again, then analyse the next module
- module_quant_disable(model_ptq, k)
-
- return quant_sensitivity
-
-# python3 sensitivity_analyse.py --weights ../../assets/yolov6s_v2_reopt.pt --batch-size 32 --batch-number 4 --conf ../../configs/repopt/yolov6s_opt.py --data-root /path
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='./yolov6s_v2_reopt.pt', help='weights path')
- parser.add_argument('--data-root', type=str, default=None, help='train data path')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size') # height, width
- parser.add_argument('--conf', type=str, default='../../configs/repopt/yolov6s_opt.py', help='model config')
- parser.add_argument('--batch-size', type=int, default=128, help='batch size')
- parser.add_argument('--batch-number', type=int, default=1, help='batch number')
- parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
- parser.add_argument('--inplace', action='store_true', help='set Detect() inplace=True')
- parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0, 1, 2, 3 or cpu')
- parser.add_argument('--calib-weights', type=str, default=None, help='weights with calibration parameter')
- parser.add_argument('--data-yaml', type=str, default='../../data/coco.yaml', help='data config')
- parser.add_argument('--eval-yaml', type=str, default='./eval.yaml', help='evaluation config')
- args = parser.parse_args()
- args.img_size *= 2 if len(args.img_size) == 1 else 1 # expand
- print(args)
- yolov6_evaler = EvalerWrapper(eval_cfg=load_yaml(args.eval_yaml))
- # Check device
- cuda = args.device != 'cpu' and torch.cuda.is_available()
- device = torch.device('cuda:0' if cuda else 'cpu')
- assert not (device.type == 'cpu' and args.half), '--half only compatible with GPU export, i.e. use --device 0'
- # Load PyTorch model
- model = load_checkpoint(args.weights, map_location=device, inplace=True, fuse=True) # load FP32 model
- model.eval()
-
- for layer in model.modules():
- if isinstance(layer, RepVGGBlock):
- layer.switch_to_deploy()
-
- for k, m in model.named_modules():
- if isinstance(m, Conv): # assign export-friendly activations
- if isinstance(m.act, nn.SiLU):
- m.act = SiLU()
- elif isinstance(m, Detect):
- m.inplace = args.inplace
-
- orig_mAP = yolov6_evaler.eval(model)
- print("Full Precision model mAP0.5={:.4f}, mAP0.5_0.95={:0.4f}".format(orig_mAP[0], orig_mAP[1]))
-
-    # Step1: create dataloader
- cfg = Config.fromfile(args.conf)
- data_cfg = load_yaml(args.data_yaml)
- train_loader, _ = create_dataloader(
- args.data_root,
- img_size=args.img_size[0],
- batch_size=args.batch_size,
- stride=32,
- hyp=dict(cfg.data_aug),
- augment=True,
- shuffle=True,
- data_dict=data_cfg)
-
- # Step2: do post training quantization
- if args.calib_weights is None:
-        model_ptq = do_ptq(model, train_loader, args.batch_number, device)
- torch.save({'model': model_ptq}, args.weights.replace('.pt', '_calib.pt'))
- else:
- model_ptq = load_ptq(model, args.calib_weights, device)
- quant_mAP = yolov6_evaler.eval(model_ptq)
- print("Post Training Quantization model mAP0.5={:.4f}, mAP0.5_0.95={:0.4f}".format(quant_mAP[0], quant_mAP[1]))
-
-    # Step3: do sensitivity analysis and save sensitivity results
- quant_sensitivity = quant_sensitivity_analyse(model_ptq, yolov6_evaler)
- qfile = "{}_quant_sensitivity_{}_calib.txt".format(os.path.basename(args.weights).split('.')[0],
- args.batch_size * args.batch_number)
- quant_sensitivity_save(quant_sensitivity, qfile)
-
-
- quant_sensitivity.sort(key=lambda tup: tup[2], reverse=True)
- for sensitivity in quant_sensitivity:
- print(sensitivity)
diff --git a/cv/detection/yolov6/pytorch/tools/partial_quantization/utils.py b/cv/detection/yolov6/pytorch/tools/partial_quantization/utils.py
deleted file mode 100644
index 16cd009144329d26c830207e972d8a6cebb3093d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/partial_quantization/utils.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import os
-from pytorch_quantization import nn as quant_nn
-
-
-def set_module(model, submodule_key, module):
- tokens = submodule_key.split('.')
- sub_tokens = tokens[:-1]
- cur_mod = model
- for s in sub_tokens:
- cur_mod = getattr(cur_mod, s)
- setattr(cur_mod, tokens[-1], module)
-
-
-def get_module(model, submodule_key):
- sub_tokens = submodule_key.split('.')
- cur_mod = model
- for s in sub_tokens:
- cur_mod = getattr(cur_mod, s)
- return cur_mod
-
-
-def module_quant_disable(model, k):
- cur_module = get_module(model, k)
- if hasattr(cur_module, '_input_quantizer'):
- cur_module._input_quantizer.disable()
- if hasattr(cur_module, '_weight_quantizer'):
- cur_module._weight_quantizer.disable()
-
-
-def module_quant_enable(model, k):
- cur_module = get_module(model, k)
- if hasattr(cur_module, '_input_quantizer'):
- cur_module._input_quantizer.enable()
- if hasattr(cur_module, '_weight_quantizer'):
- cur_module._weight_quantizer.enable()
-
-
-def model_quant_disable(model):
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- module.disable()
-
-
-def model_quant_enable(model):
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- module.enable()
-
-
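-# Inputs feeding the same Concat need a shared quantization scale; otherwise the
-# backend must requantize around the Concat. Fusing every input to the maximum
-# amax of the group keeps the concatenation scale-consistent.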
-def concat_quant_amax_fuse(ops_list):
- if len(ops_list) <= 1:
- return
-
- amax = -1
- for op in ops_list:
- if hasattr(op, '_amax'):
- op_amax = op._amax.detach().item()
- elif hasattr(op, '_input_quantizer'):
- op_amax = op._input_quantizer._amax.detach().item()
- else:
- print("Not quantable op, skip")
- return
- print("op amax = {:7.4f}, amax = {:7.4f}".format(op_amax, amax))
- if amax < op_amax:
- amax = op_amax
-
- print("amax = {:7.4f}".format(amax))
- for op in ops_list:
- if hasattr(op, '_amax'):
- op._amax.fill_(amax)
- elif hasattr(op, '_input_quantizer'):
- op._input_quantizer._amax.fill_(amax)
-
-
-def quant_sensitivity_load(file):
-    assert os.path.exists(file), "File {} does not exist".format(file)
- quant_sensitivity = list()
- with open(file, 'r') as qfile:
- lines = qfile.readlines()
- for line in lines:
- layer, mAP1, mAP2 = line.strip('\n').split(' ')
- quant_sensitivity.append((layer, float(mAP1), float(mAP2)))
-
- return quant_sensitivity
-
-
-def quant_sensitivity_save(quant_sensitivity, file):
- with open(file, 'w') as qfile:
- for item in quant_sensitivity:
- name, mAP1, mAP2 = item
- line = name + " " + "{:0.4f}".format(mAP1) + " " + "{:0.4f}".format(mAP2) + "\n"
- qfile.write(line)
diff --git a/cv/detection/yolov6/pytorch/tools/qat/README.md b/cv/detection/yolov6/pytorch/tools/qat/README.md
deleted file mode 100644
index deef45cb427cf167952d5d54073bdd4bff6727cf..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/qat/README.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Quantization-Aware Training
-
-As of the v0.2.0 release, traditional post-training quantization (PTQ) degrades the performance of `YOLOv6-S` from 43.4% to 41.2%. This is much improved compared with v0.1.0, since the most sensitive layers are removed, yet it is still not ready for deployment. Meanwhile, due to the inconsistency of the reparameterization blocks between training and inference, quantization-aware training (QAT) cannot be directly integrated into YOLOv6. As a remedy, we first train a single-branch network called `YOLOv6-S-RepOpt` with [RepOptimizer](https://arxiv.org/pdf/2205.15242.pdf). It reaches 43.1% mAP, very close to YOLOv6-S. We then apply our quantization strategy to `YOLOv6-S-RepOpt`.
-
-We apply post-training quantization to `YOLOv6-S-RepOpt`, and its mAP drops only slightly, by 0.5%. Hence it is necessary to use QAT to further improve the accuracy. Besides, we employ **channel-wise distillation** to accelerate convergence. We finally reach a quantized model at 43.0% mAP.
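-
-Channel-wise distillation aligns the per-channel spatial distributions of student and teacher feature maps rather than raw activation values. The snippet below is an illustrative sketch of such a loss, not the exact implementation wired into YOLOv6's trainer:
-
-```python
-import torch.nn.functional as F
-
-def cwd_loss(feat_s, feat_t, tau=1.0):
-    """Channel-wise distillation between (N, C, H, W) feature maps."""
-    n, c, h, w = feat_s.shape
-    # per-channel softmax over spatial locations, softened by temperature tau
-    log_p_s = F.log_softmax(feat_s.view(n, c, -1) / tau, dim=-1)
-    p_t = F.softmax(feat_t.view(n, c, -1) / tau, dim=-1)
-    # KL divergence between the channel distributions, averaged over channels
-    return F.kl_div(log_p_s, p_t, reduction='batchmean') * (tau ** 2) / c
-```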
-
-To deploy the quantized model on typical NVIDIA GPUs (e.g. T4), we export the model to the ONNX format, then use TensorRT to build a serialized engine along with the computed scale cache. The performance reaches **43.3% mAP**, only 0.1% short of the fully float-precision `YOLOv6-S`.
-
-
-## Prerequisites
-
-It is required to install `pytorch_quantization`, on top of which we build our quantization strategy.
-
-```bash
-pip install --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com nvidia-pyindex
-pip install --extra-index-url=https://pypi.ngc.nvidia.com --trusted-host pypi.ngc.nvidia.com pytorch_quantization
-```
-
-## Training with RepOptimizer
-Firstly, train a `YOLOv6-S RepOpt` as follows, or download our released [checkpoint](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6s_v2_reopt.pt) and [scales](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6s_v2_scale.pt).
-* [Tutorial of RepOpt for YOLOv6](https://github.com/meituan/YOLOv6/blob/main/docs/tutorial_repopt.md)
-## PTQ
-We perform PTQ to get the range of activations and weights.
-```bash
-CUDA_VISIBLE_DEVICES=0 python tools/train.py \
- --data ./data/coco.yaml \
- --output-dir ./runs/opt_train_v6s_ptq \
- --conf configs/repopt/yolov6s_opt_qat.py \
- --quant \
- --calib \
- --batch 32 \
- --workers 0
-```
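-
-The `--quant --calib` run feeds training batches through the network to collect activation statistics and saves a calibrated checkpoint, which the QAT step below starts from.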
-
-## QAT
-
-Our proposed QAT strategy comes with channel-wise distillation. It loads the calibrated RepOptimizer-trained model and trains for 10 epochs. To reproduce the result:
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 \
- tools/train.py \
- --data ./data/coco.yaml \
- --output-dir ./runs/opt_train_v6s_qat \
- --conf configs/repopt/yolov6s_opt_qat.py \
- --quant \
- --distill \
- --distill_feat \
- --batch 128 \
- --epochs 10 \
- --workers 32 \
- --teacher_model_path ./assets/yolov6s_v2_reopt_43.1.pt \
- --device 0,1,2,3,4,5,6,7
-```
-## ONNX Export
-To export to ONNX,
-```bash
-python3 qat_export.py --weights yolov6s_v2_reopt_43.1.pt --quant-weights yolov6s_v2_reopt_qat_43.0.pt --graph-opt --export-batch-size 1
-```
-
-## TensorRT Deployment
-
-To build a TRT engine,
-
-```bash
-trtexec --workspace=1024 --percentile=99 --streams=1 --int8 --fp16 --avgRuns=10 --onnx=yolov6s_v2_reopt_qat_43.0_bs1.sim.onnx --calib=yolov6s_v2_reopt_qat_43.0_remove_qdq_bs1_calibration_addscale.cache --saveEngine=yolov6s_v2_reopt_qat_43.0_bs1.sim.trt
-```
-You can directly build an engine with [yolov6s_v2_quant.onnx](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6s_v2_reopt_qat_43.0_remove_qdq_bs1.sim.onnx) and [yolov6s_v2_calibration.cache](https://github.com/meituan/YOLOv6/releases/download/0.2.0/yolov6s_v2_reopt_qat_43.0_remove_qdq_bs1_calibration_addscale.cache).
-
-## Performance Comparison
-
-We release our quantized and graph-optimized YOLOv6-S (v0.2.0) model. The following throughput is measured with TensorRT 8.4 on an NVIDIA Tesla T4 GPU.
-
-| Model | Size | Precision | mAP val 0.5:0.95 | Speed T4 trt b1 (fps) | Speed T4 trt b32 (fps) |
-| :-------------- | ----- | --------- | :--------------- | --------------------- | ---------------------- |
-| [**YOLOv6-S RepOpt**] | 640 | INT8 | 43.3 | 619 | 924 |
-| [**YOLOv6-S**] | 640 | FP16 | 43.4 | 377 | 541 |
-| [**YOLOv6-T RepOpt**] | 640 | INT8 | 39.8 | 741 | 1167 |
-| [**YOLOv6-T**] | 640 | FP16 | 40.3 | 449 | 659 |
-| [**YOLOv6-N RepOpt**] | 640 | INT8 | 34.8 | 1114 | 1828 |
-| [**YOLOv6-N**] | 640 | FP16 | 35.9 | 802 | 1234 |
diff --git a/cv/detection/yolov6/pytorch/tools/qat/onnx_utils.py b/cv/detection/yolov6/pytorch/tools/qat/onnx_utils.py
deleted file mode 100644
index 19aa1311189e98ab63c17345052960815565fb05..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/qat/onnx_utils.py
+++ /dev/null
@@ -1,293 +0,0 @@
-import os.path
-
-import onnx
-import numpy as np
-import struct
-import sys
-import copy
-
-def search_node_by_output_id(nodes, output_id: str):
- prev_node = None
- for node_id, node in enumerate(nodes):
- if output_id in node.output:
- prev_node = node
- break
- return prev_node
-
-def get_prev_node(nodes, node):
-    node_input_list = node.input
-    prev_node_list = []
-    for node_id, candidate in enumerate(nodes):
-        for node_output in candidate.output:
-            if node_output in node_input_list:
-                prev_node_list.append(candidate)
-    return prev_node_list
-
-def get_next_node(nodes, node):
-    node_output_list = node.output
-    next_node_list = []
-    for node_id, candidate in enumerate(nodes):
-        for node_input in candidate.input:
-            if node_input in node_output_list:
-                next_node_list.append(candidate)
-    return next_node_list
-
-def get_conv_qdq_node(nodes, conv_node):
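-    """Return the (DequantizeLinear, QuantizeLinear) pair feeding conv_node's input, or (None, None)."""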
- # get conv input
- conv_input_id = conv_node.input[0]
- # print(conv_input_id)
- dequant_node = None
- quant_node = None
- # get dequant node by conv input
- for node_id, node in enumerate(nodes):
- if node.op_type == "DequantizeLinear" and conv_input_id in node.output:
- dequant_node = node
- break
- # get quant node by dequant input
- if dequant_node is not None:
- dequant_input_id = dequant_node.input[0]
- # print(dequant_input_id)
- for node_id, node in enumerate(nodes):
- if node.op_type == "QuantizeLinear" and dequant_input_id in node.output:
- quant_node = node
- break
- # print(dequant_node)
- # print(quant_node)
- return dequant_node, quant_node
-
-def onnx_conv_horizon_fuse(onnx_model):
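-    """Equalize input quantization scales across Conv branches feeding a common Add,
-    rewriting each branch's Q/DQ scale to the maximum of the group so that
-    horizontally fused branches share a single scale."""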
- onnx_replica = copy.deepcopy(onnx_model)
- graph = onnx_replica.graph
- nodes = graph.node
- # find qualified add op
- pattern = []
- for node_id, node in enumerate(graph.node):
- if node.op_type == "Add":
- avail_count = 0
- for input_id in node.input:
- prev_node = search_node_by_output_id(graph.node, input_id)
- # prev node must be BatchNorm or Conv
- if prev_node is not None:
- if prev_node.op_type in ['BatchNormalization', 'Conv'] and \
- len(prev_node.output) == 1:
- avail_count += 1
- if avail_count == 2:
- pattern.append(node)
- # print(pattern)
-
- # process each add
- for add_node in pattern:
- prev_add_node_list = get_prev_node(nodes, add_node)
- # collect conv node
- conv_node_list = []
- for node in prev_add_node_list:
- if node.op_type == "BatchNormalization":
- prev_node_list = get_prev_node(nodes, node)
- assert len(prev_node_list) == 1 and prev_node_list[0].op_type == "Conv", \
- "Conv horizon fusion pattern not match"
- conv_node_list.append(prev_node_list[0])
- else:
- conv_node_list.append(node)
-
- # print(conv_node_list)
- # collect qdq node
- qdq_node_list = []
- for node in conv_node_list:
- dequant_node, quant_node = get_conv_qdq_node(nodes, node)
- assert dequant_node is not None and quant_node is not None, "Conv horizon fusion pattern not match"
- qdq_node_list.extend((dequant_node, quant_node))
-
- # find scale node
- scale_node_list = []
- for qdq_node in qdq_node_list:
-        scale_input_id = qdq_node.input[1]
-        for node in nodes:
-            if scale_input_id in node.output:
- scale_node_list.append(node)
- # print(scale_node_list)
-        # find the maximum scale across the group
-        max_scale = 0.0
-        for scale_node in scale_node_list:
-            val = np.frombuffer(scale_node.attribute[0].t.raw_data, dtype=np.float32)[0]
-            if max_scale < val:
-                max_scale = val
-        # rewrite every scale in the group to the maximum
-        for scale_node in scale_node_list:
-            scale_node.attribute[0].t.raw_data = bytes(struct.pack("f", max_scale))
-
- # check
- for scale_node in scale_node_list:
- val = np.frombuffer(scale_node.attribute[0].t.raw_data, dtype=np.float32)[0]
- print(val)
-
- return onnx_replica
-
-def onnx_add_insert_qdqnode(onnx_model):
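-    """Duplicate the Q/DQ pair on a tensor consumed by both an Add and a QuantizeLinear,
-    giving each consumer its own quantized path."""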
- onnx_replica = copy.deepcopy(onnx_model)
- graph = onnx_replica.graph
- nodes = graph.node
- # find qualified add op
- patterns = []
- for node_id, node in enumerate(graph.node):
- if node.op_type == "Add":
- same_input_node_list = []
- same_input = None
- for add_input in node.input:
- for other_id, other_node in enumerate(nodes):
- if other_id != node_id:
- for other_input in other_node.input:
- if other_input == add_input:
- same_input_node_list.append(other_node)
- same_input = other_input
- break
-            # Find the previous node of Add whose output has two consumers: a QuantizeLinear and the Add itself
- if len(same_input_node_list) == 1 and same_input_node_list[0].op_type == 'QuantizeLinear':
- prev_add_node = search_node_by_output_id(nodes, same_input)
- dequant_node = get_next_node(nodes, same_input_node_list[0])[0]
- patterns.append((node, prev_add_node, same_input_node_list[0], dequant_node, same_input))
- print(patterns)
- for pattern in patterns:
- add_node, prev_add_node, quant_node, dequant_node, same_input = pattern
- dq_x, dq_s, dq_z = dequant_node.input
- new_quant_node = onnx.helper.make_node('QuantizeLinear',
- inputs=quant_node.input,
- outputs=[prev_add_node.name + "_Dequant"],
- name=prev_add_node.name + "_QuantizeLinear")
- new_dequant_node = onnx.helper.make_node('DequantizeLinear',
- inputs=[prev_add_node.name + "_Dequant", dq_s, dq_z],
- outputs=[prev_add_node.name + "_Add"],
- name=prev_add_node.name + "_DequantizeLinear")
-
- add_node.input.remove(same_input)
- add_node.input.append(prev_add_node.name + "_Add")
- for node_id, node in enumerate(graph.node):
- if node.name == prev_add_node.name:
- graph.node.insert(node_id + 1, new_quant_node)
- graph.node.insert(node_id + 2, new_dequant_node)
-
- return onnx_replica
-
-
-def onnx_remove_qdqnode(onnx_model):
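-    """Strip all QuantizeLinear/DequantizeLinear nodes and return the cleaned model plus a
-    map of tensor name -> hex-encoded activation scale, for a TensorRT calibration cache."""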
- onnx_replica = copy.deepcopy(onnx_model)
- graph = onnx_replica.graph
- nodes = graph.node
-
- # demo for remove node with first input and output
- in_rename_map = {}
- scale_node_list = []
- zero_node_list = []
- activation_map = {}
- for node_id, node in enumerate(graph.node):
- if node.op_type == "QuantizeLinear":
- # node input
- in_name = node.input[0]
- scale_name = node.input[1]
- zero_name = node.input[2]
- # print(scale_name)
- # node output
- out_name = node.output[0]
- # record input, remove one node, set node's input to its next
- in_rename_map[out_name] = in_name
- scale_node_list.append(scale_name)
- zero_node_list.append(zero_name)
- # record scale of activation
- for i, node in enumerate(graph.node):
- if node.output[0] == scale_name:
- if len(node.attribute[0].t.dims) == 0:
- # print(node.attribute[0].t.raw_data)
- # print(np.frombuffer(node.attribute[0].t.raw_data, dtype=np.float32))
- val = np.frombuffer(node.attribute[0].t.raw_data, dtype=np.float32)[0]
- if in_name in activation_map.keys():
- old_val = struct.unpack('!f', bytes.fromhex(activation_map[in_name]))[0]
- # print("Already record, old {:.4f}, new {:.4f}".format(old_val, val))
- if val > old_val:
- activation_map[in_name] = struct.pack('>f', val).hex()
- else:
- activation_map[in_name] = struct.pack('>f', val).hex()
- # remove QuantizeLinear node
- graph.node.remove(nodes[node_id])
-
-
- # relink
- for node_id, node in enumerate(graph.node):
- for in_id, in_name in enumerate(node.input):
- if in_name in in_rename_map.keys():
- # set node input == removed node's input
- node.input[in_id] = in_rename_map[in_name]
-
- in_rename_map = {}
- # activation_map = {}
- for node_id, node in enumerate(graph.node):
- if node.op_type == "DequantizeLinear":
- in_name = node.input[0]
- scale_name = node.input[1]
- zero_name = node.input[2]
- # print(scale_name)
- out_name = node.output[0]
- in_rename_map[out_name] = in_name
- graph.node.remove(nodes[node_id])
- scale_node_list.append(scale_name)
- zero_node_list.append(zero_name)
-
- # relink
- for node_id, node in enumerate(graph.node):
- for in_id, in_name in enumerate(node.input):
- if in_name in in_rename_map.keys():
- node.input[in_id] = in_rename_map[in_name]
-
- nodes = graph.node
- for node_name in (scale_node_list + zero_node_list):
- for node_id, node in enumerate(graph.node):
- if node.name == node_name:
- # print("node input={}".format(node.input))
- # for node_input in node.input:
- # print(node_input)
- # graph.node.remove(node_input)
- graph.node.remove(nodes[node_id])
-
- for node_name in (scale_node_list + zero_node_list):
- for node_id, node in enumerate(graph.node):
- if node.output[0] == node_name:
- # print("node input={}".format(node.input))
- # for node_input in node.input:
- # print(node_input)
- # graph.node.remove(node_input)
- graph.node.remove(nodes[node_id])
-
- return onnx_replica, activation_map
-
-def save_calib_cache_file(cache_file, activation_map, headline='TRT-8XXX-EntropyCalibration2\n'):
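-    """Write activation scales (hex-encoded big-endian floats) in TensorRT calibration-cache format."""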
- with open(os.path.join(cache_file), 'w') as cfile:
- cfile.write(headline)
- for k, v in activation_map.items():
- cfile.write("{}: {}\n".format(k, v))
-
-def get_remove_qdq_onnx_and_cache(onnx_file):
- model = onnx.load(onnx_file)
- # onnx_insert = onnx_add_insert_qdqnode(model)
- model_wo_qdq, activation_map = onnx_remove_qdqnode(model)
- onnx_name, onnx_dir = os.path.basename(onnx_file), os.path.dirname(onnx_file)
- onnx_new_name = onnx_name.replace('.onnx', '_remove_qdq.onnx')
- onnx.save(model_wo_qdq, os.path.join(onnx_dir, onnx_new_name))
- cache_name = onnx_new_name.replace('.onnx', '_add_insert_qdq_calibration.cache')
- save_calib_cache_file(os.path.join(onnx_dir, cache_name), activation_map)
-
-if __name__ == '__main__':
-
- onnx_file = sys.argv[1]
- get_remove_qdq_onnx_and_cache(onnx_file)
diff --git a/cv/detection/yolov6/pytorch/tools/qat/qat_export.py b/cv/detection/yolov6/pytorch/tools/qat/qat_export.py
deleted file mode 100644
index 541005d3c6b3bd7aa4f788dc791e9f56eb58e635..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/qat/qat_export.py
+++ /dev/null
@@ -1,169 +0,0 @@
-import argparse
-import time
-import sys
-import os
-ROOT = os.getcwd()
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT))
-sys.path.append('../../')
-from yolov6.models.effidehead import Detect
-from yolov6.models.yolo import build_model
-from yolov6.layers.common import *
-from yolov6.utils.events import LOGGER, load_yaml
-from yolov6.utils.checkpoint import load_checkpoint, load_state_dict
-from yolov6.utils.config import Config
-from tools.partial_quantization.eval import EvalerWrapper
-from tools.partial_quantization.utils import get_module, concat_quant_amax_fuse
-from tools.qat.qat_utils import qat_init_model_manu
-from pytorch_quantization import nn as quant_nn
-from onnx_utils import get_remove_qdq_onnx_and_cache
-
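-# Quantizer groups whose outputs feed a common concat/merge downstream; their amax
-# values are fused (see concat_quant_amax_fuse) so each group shares one scale.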
-op_concat_fusion_list = [
- ('backbone.ERBlock_5.2.m', 'backbone.ERBlock_5.2.cv2.conv'),
- ('backbone.ERBlock_5.0.conv', 'neck.Rep_p4.conv1.conv', 'neck.upsample_feat0_quant'),
- ('backbone.ERBlock_4.0.conv', 'neck.Rep_p3.conv1.conv', 'neck.upsample_feat1_quant'),
- ('neck.upsample1.upsample_transpose', 'neck.Rep_n3.conv1.conv'),
- ('neck.upsample0.upsample_transpose', 'neck.Rep_n4.conv1.conv'),
- #
- ('detect.reg_convs.0.conv', 'detect.cls_convs.0.conv'),
- ('detect.reg_convs.1.conv', 'detect.cls_convs.1.conv'),
- ('detect.reg_convs.2.conv', 'detect.cls_convs.2.conv'),
-]
-
-def zero_scale_fix(model, device):
-
- for k, m in model.named_modules():
- # print(k, m)
- if isinstance(m, quant_nn.QuantConv2d) or \
- isinstance(m, quant_nn.QuantConvTranspose2d):
- # print(m)
- # print(m._weight_quantizer._amax)
- weight_amax = m._weight_quantizer._amax.detach().cpu().numpy()
- # print(weight_amax)
- print(k)
- ones = np.ones_like(weight_amax)
- print("zero scale number = {}".format(np.sum(weight_amax == 0.0)))
- weight_amax = np.where(weight_amax == 0.0, ones, weight_amax)
- m._weight_quantizer._amax.copy_(torch.from_numpy(weight_amax).to(device))
- else:
- # module can not be quantized, continue
- continue
-
-# python3 qat_export.py --weights yolov6s_v2_reopt.pt --quant-weights yolov6s_v2_reopt_qat_43.0.pt --export-batch-size 1 --conf ../../configs/repopt/yolov6s_opt_qat.py
-# python3 qat_export.py --weights v6s_t.pt --quant-weights yolov6t_v2_reopt_qat_40.1.pt --export-batch-size 1 --conf ../../configs/repopt/yolov6_tiny_opt_qat.py
-# python3 qat_export.py --weights v6s_n.pt --quant-weights yolov6n_v2_reopt_qat_34.9.pt --export-batch-size 1 --conf ../../configs/repopt/yolov6n_opt_qat.py
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='./yolov6s_v2_reopt.pt', help='weights path')
- parser.add_argument('--quant-weights', type=str, default='./yolov6s_v2_reopt_qat_43.0.pt', help='calib weights path')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size') # height, width
- parser.add_argument('--conf', type=str, default='../../configs/repopt/yolov6s_opt_qat.py', help='model config')
- parser.add_argument('--export-batch-size', type=int, default=None, help='export batch size')
- parser.add_argument('--calib', action='store_true', default=False, help='calibrated model')
- parser.add_argument('--scale-fix', action='store_true', help='enable scale fix')
- parser.add_argument('--fuse-bn', action='store_true', help='fuse bn')
- parser.add_argument('--graph-opt', action='store_true', help='enable graph optimizer')
- parser.add_argument('--inplace', action='store_true', help='set Detect() inplace=True')
- parser.add_argument('--end2end', action='store_true', help='export end2end onnx')
- parser.add_argument('--trt-version', type=int, default=8, help='tensorrt version')
- parser.add_argument('--with-preprocess', action='store_true', help='export bgr2rgb and normalize')
- parser.add_argument('--max-wh', type=int, default=None, help='None for tensorrt nms, int value for onnx-runtime nms')
-    parser.add_argument('--topk-all', type=int, default=100, help='topk objects for every image')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='iou threshold for NMS')
- parser.add_argument('--conf-thres', type=float, default=0.4, help='conf threshold for NMS')
- parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0, 1, 2, 3 or cpu')
- parser.add_argument('--eval-yaml', type=str, default='../partial_quantization/eval.yaml', help='evaluation config')
- args = parser.parse_args()
- args.img_size *= 2 if len(args.img_size) == 1 else 1 # expand
- print(args)
- t = time.time()
- # Check device
-    cuda = args.device != 'cpu' and torch.cuda.is_available()
-    device = torch.device('cuda:0' if cuda else 'cpu')
-    model = load_checkpoint(args.weights, map_location=device, inplace=args.inplace, fuse=args.fuse_bn)
- yolov6_evaler = EvalerWrapper(eval_cfg=load_yaml(args.eval_yaml))
- # orig_mAP = yolov6_evaler.eval(model)
- for layer in model.modules():
- if isinstance(layer, RepVGGBlock):
- layer.switch_to_deploy()
- for k, m in model.named_modules():
- if isinstance(m, Conv): # assign export-friendly activations
- if isinstance(m.act, nn.SiLU):
- m.act = SiLU()
- elif isinstance(m, Detect):
- m.inplace = args.inplace
- # Load PyTorch model
- cfg = Config.fromfile(args.conf)
- # init qat model
- qat_init_model_manu(model, cfg, args)
- print(model)
- model.neck.upsample_enable_quant(cfg.ptq.num_bits, cfg.ptq.calib_method)
- ckpt = torch.load(args.quant_weights)
- model.load_state_dict(ckpt['model'].float().state_dict())
- print(model)
- model.to(device)
- if args.scale_fix:
- zero_scale_fix(model, device)
- if args.graph_opt:
- # concat amax fusion
- for sub_fusion_list in op_concat_fusion_list:
- ops = [get_module(model, op_name) for op_name in sub_fusion_list]
- concat_quant_amax_fuse(ops)
- qat_mAP = yolov6_evaler.eval(model)
- print(qat_mAP)
- if args.end2end:
- from yolov6.models.end2end import End2End
- model = End2End(model, max_obj=args.topk_all, iou_thres=args.iou_thres,score_thres=args.conf_thres,
- max_wh=args.max_wh, device=device, trt_version=args.trt_version, with_preprocess=args.with_preprocess)
- # ONNX export
- quant_nn.TensorQuantizer.use_fb_fake_quant = True
- if args.export_batch_size is None:
- img = torch.zeros(1, 3, *args.img_size).to(device)
- export_file = args.quant_weights.replace('.pt', '_dynamic.onnx') # filename
- if args.graph_opt:
- export_file = export_file.replace('.onnx', '_graph_opt.onnx')
- if args.end2end:
- export_file = export_file.replace('.onnx', '_e2e.onnx')
- dynamic_axes = {
- "image_arrays": {0: "batch"},
- }
- if args.end2end:
- dynamic_axes["num_dets"] = {0: "batch"}
- dynamic_axes["det_boxes"] = {0: "batch"}
- dynamic_axes["det_scores"] = {0: "batch"}
- dynamic_axes["det_classes"] = {0: "batch"}
- else:
- dynamic_axes["outputs"] = {0: "batch"}
- torch.onnx.export(model,
- img,
- export_file,
- verbose=False,
- opset_version=13,
- training=torch.onnx.TrainingMode.EVAL,
- do_constant_folding=True,
- input_names=['images'],
- output_names=['num_dets', 'det_boxes', 'det_scores', 'det_classes']
- if args.end2end else ['outputs'],
- dynamic_axes=dynamic_axes
- )
- else:
- img = torch.zeros(args.export_batch_size, 3, *args.img_size).to(device)
- export_file = args.quant_weights.replace('.pt', '_bs{}.onnx'.format(args.export_batch_size)) # filename
- if args.graph_opt:
- export_file = export_file.replace('.onnx', '_graph_opt.onnx')
- if args.end2end:
- export_file = export_file.replace('.onnx', '_e2e.onnx')
- torch.onnx.export(model,
- img,
- export_file,
- verbose=False,
- opset_version=13,
- training=torch.onnx.TrainingMode.EVAL,
- do_constant_folding=True,
- input_names=['images'],
- output_names=['num_dets', 'det_boxes', 'det_scores', 'det_classes']
- if args.end2end else ['outputs'],
- )
-
- get_remove_qdq_onnx_and_cache(export_file)
diff --git a/cv/detection/yolov6/pytorch/tools/qat/qat_utils.py b/cv/detection/yolov6/pytorch/tools/qat/qat_utils.py
deleted file mode 100644
index e5762726ff4ccabdb4ecb7095eb5ddc13de85fd3..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/qat/qat_utils.py
+++ /dev/null
@@ -1,153 +0,0 @@
-from tqdm import tqdm
-import torch
-import torch.nn as nn
-
-from pytorch_quantization import nn as quant_nn
-from pytorch_quantization import tensor_quant
-from pytorch_quantization import calib
-from pytorch_quantization.tensor_quant import QuantDescriptor
-
-from tools.partial_quantization.utils import set_module, module_quant_disable
-
-def collect_stats(model, data_loader, num_batches):
- """Feed data to the network and collect statistic"""
-
- # Enable calibrators
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- if module._calibrator is not None:
- module.disable_quant()
- module.enable_calib()
- else:
- module.disable()
-
- for i, (image, _, _, _) in tqdm(enumerate(data_loader), total=num_batches):
- image = image.float()/255.0
- model(image.cuda())
- if i >= num_batches:
- break
-
- # Disable calibrators
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- if module._calibrator is not None:
- module.enable_quant()
- module.disable_calib()
- else:
- module.enable()
-
-def compute_amax(model, **kwargs):
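-    """Turn the collected calibration statistics into amax values for every quantizer."""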
- # Load Calib result
- for name, module in model.named_modules():
- if isinstance(module, quant_nn.TensorQuantizer):
- print(F"{name:40}: {module}")
- if module._calibrator is not None:
- #MinMaxCalib
- if isinstance(module._calibrator, calib.MaxCalibrator):
- module.load_calib_amax()
- else:
- #HistogramCalib
- module.load_calib_amax(**kwargs)
- model.cuda()
-
-def ptq_calibrate(model, train_loader, cfg):
- model.eval()
- model.cuda()
- # It is a bit slow since we collect histograms on CPU
- with torch.no_grad():
- collect_stats(model, train_loader, cfg.ptq.calib_batches)
- compute_amax(model, method=cfg.ptq.histogram_amax_method, percentile=cfg.ptq.histogram_amax_percentile)
-
-def qat_init_model_manu(model, cfg, args):
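-    """Manually swap Conv2d / ConvTranspose2d / MaxPool2d modules for their quantized counterparts."""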
- # print(model)
- conv2d_weight_default_desc = tensor_quant.QUANT_DESC_8BIT_CONV2D_WEIGHT_PER_CHANNEL
- conv2d_input_default_desc = QuantDescriptor(num_bits=cfg.ptq.num_bits, calib_method=cfg.ptq.calib_method)
-
- convtrans2d_weight_default_desc = tensor_quant.QUANT_DESC_8BIT_CONVTRANSPOSE2D_WEIGHT_PER_CHANNEL
- convtrans2d_input_default_desc = QuantDescriptor(num_bits=cfg.ptq.num_bits, calib_method=cfg.ptq.calib_method)
-
- for k, m in model.named_modules():
- if 'proj_conv' in k:
- print("Skip Layer {}".format(k))
- continue
- if args.calib is True and cfg.ptq.sensitive_layers_skip is True:
- if k in cfg.ptq.sensitive_layers_list:
- print("Skip Layer {}".format(k))
- continue
- # print(k, m)
- if isinstance(m, nn.Conv2d):
- # print("in_channel = {}".format(m.in_channels))
- # print("out_channel = {}".format(m.out_channels))
- # print("kernel size = {}".format(m.kernel_size))
- # print("stride size = {}".format(m.stride))
- # print("pad size = {}".format(m.padding))
- in_channels = m.in_channels
- out_channels = m.out_channels
- kernel_size = m.kernel_size
- stride = m.stride
- padding = m.padding
- quant_conv = quant_nn.QuantConv2d(in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- quant_desc_input = conv2d_input_default_desc,
- quant_desc_weight = conv2d_weight_default_desc)
- quant_conv.weight.data.copy_(m.weight.detach())
- if m.bias is not None:
- quant_conv.bias.data.copy_(m.bias.detach())
- else:
- quant_conv.bias = None
- set_module(model, k, quant_conv)
- elif isinstance(m, nn.ConvTranspose2d):
- # print("in_channel = {}".format(m.in_channels))
- # print("out_channel = {}".format(m.out_channels))
- # print("kernel size = {}".format(m.kernel_size))
- # print("stride size = {}".format(m.stride))
- # print("pad size = {}".format(m.padding))
- in_channels = m.in_channels
- out_channels = m.out_channels
- kernel_size = m.kernel_size
- stride = m.stride
- padding = m.padding
- quant_convtrans = quant_nn.QuantConvTranspose2d(in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- quant_desc_input = convtrans2d_input_default_desc,
- quant_desc_weight = convtrans2d_weight_default_desc)
- quant_convtrans.weight.data.copy_(m.weight.detach())
- if m.bias is not None:
- quant_convtrans.bias.data.copy_(m.bias.detach())
- else:
- quant_convtrans.bias = None
- set_module(model, k, quant_convtrans)
- elif isinstance(m, nn.MaxPool2d):
- # print("kernel size = {}".format(m.kernel_size))
- # print("stride size = {}".format(m.stride))
- # print("pad size = {}".format(m.padding))
- # print("dilation = {}".format(m.dilation))
- # print("ceil mode = {}".format(m.ceil_mode))
- kernel_size = m.kernel_size
- stride = m.stride
- padding = m.padding
- dilation = m.dilation
- ceil_mode = m.ceil_mode
- quant_maxpool2d = quant_nn.QuantMaxPool2d(kernel_size,
- stride,
- padding,
- dilation,
- ceil_mode,
- quant_desc_input = conv2d_input_default_desc)
- set_module(model, k, quant_maxpool2d)
- else:
- # module can not be quantized, continue
- continue
-
-def skip_sensitive_layers(model, sensitive_layers):
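-    """Disable quantization for every module whose name appears in sensitive_layers."""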
- print('Skip sensitive layers...')
- for name, module in model.named_modules():
- if name in sensitive_layers:
- print(F"Disable {name}")
- module_quant_disable(model, name)
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/mnn/README.md b/cv/detection/yolov6/pytorch/tools/quantization/mnn/README.md
deleted file mode 100644
index 91f12c935e430d85e70b9494768513777e078e31..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/mnn/README.md
+++ /dev/null
@@ -1 +0,0 @@
-# Coming soon
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/ppq/ProgramEntrance.py b/cv/detection/yolov6/pytorch/tools/quantization/ppq/ProgramEntrance.py
deleted file mode 100644
index 38c9c668598464c74c5c51b630d8630a7fa40c8a..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/ppq/ProgramEntrance.py
+++ /dev/null
@@ -1,189 +0,0 @@
-try:
- from ppq.core.config import PPQ_CONFIG
- if PPQ_CONFIG.VERSION < '0.6.6':
-        raise ValueError('To run this script you must install a newer version of PPQ (> 0.6.6)')
-
- import ppq.lib as PFL
- from ppq import TargetPlatform, TorchExecutor, graphwise_error_analyse
- from ppq.api import ENABLE_CUDA_KERNEL
- from ppq.api.interface import load_onnx_graph
- from ppq.core import (QuantizationPolicy, QuantizationProperty,
- RoundingPolicy)
- from ppq.IR import Operation
- from ppq.quantization.optim import (LearnedStepSizePass,
- ParameterBakingPass,
- ParameterQuantizePass,
- QuantAlignmentPass, QuantizeFusionPass,
- QuantizeSimplifyPass,
- RuntimeCalibrationPass)
-
-except ImportError:
-    raise Exception('To run this script you must install the PPQ quantization toolkit (https://github.com/openppl-public/ppq)')
-from typing import List
-
-import torch
-
-# ------------------------------------------------------------
-# In this example we show how to quantize a YOLOv6 model with INT8.
-# We calibrate with random data here, which will not give good results;
-# when quantizing your own network, use real data and the correct preprocessing.
-#
-# Depending on the target platform you pick, PPQ can generate quantized models
-# for TensorRT, OpenVINO, NCNN, and many other backends.
-# ------------------------------------------------------------
-graph = load_onnx_graph(onnx_import_file='Models/det_model/yolov6s.onnx')
-dataset = [torch.rand(size=[1, 3, 640, 640]) for _ in range(64)]
-
-# -----------------------------------------------------------
-# We will quantize with PFL, the PPQ Foundation Library.
-# This is the new quantization API introduced in PPQ 0.6.6; it targets
-# algorithm engineers, deployment engineers, and chip developers, and is
-# more flexible. We will use the Quantizer to initialize per-op quantization
-# info by hand, and dispatch the model by hand as well.
-#
-# Before we start, let us introduce quantizers, quantization info, and the
-# dispatching table. Quantization info in PPQ is described by
-# TensorQuantizationConfig (TQC); this structure describes how a tensor is to
-# be quantized, including bit width, quantization policy, scale, offset, etc.
-# ------------------------------------------------------------
-from ppq import TensorQuantizationConfig as TQC
-
-MyTQC = TQC(
- policy = QuantizationPolicy(
- QuantizationProperty.SYMMETRICAL +
- QuantizationProperty.LINEAR +
- QuantizationProperty.PER_TENSOR),
- rounding=RoundingPolicy.ROUND_HALF_EVEN,
- num_of_bits=8, quant_min=-128, quant_max=127,
- exponent_bits=0, channel_axis=None,
- observer_algorithm='minmax'
-)
-# ------------------------------------------------------------
-# As an example, we created a "linear", "symmetric", "tensorwise" quantization
-# config; all three properties are part of its QuantizationPolicy.
-# It rounds with ROUND_HALF_EVEN, uses 8 bits (0 of them exponent bits),
-# and clamps to [-128.0, 127.0].
-# It is tensorwise, hence channel_axis = None.
-# observer_algorithm says its scale will later be determined by minmax calibration.
-
-# The example above initializes the TQC but does not actually activate it:
-# MyTQC.scale and MyTQC.offset are still empty and only become meaningful after
-# calibration, and its state MyTQC.state is still Quantization.INITIAL, meaning
-# this TQC does not yet take part in computation.
-# ------------------------------------------------------------
-
-# ------------------------------------------------------------
-# Next we introduce the quantizer, a core PPQ type whose job is to initialize
-# quantization info (TQCs) for every op inside the quantizable region.
-# PPQ implements many different quantizers for different situations; here we
-# create three of them: TRT_INT8, GRAPHCORE_FP8, and TRT_FP8. The quantization
-# info they generate differs; see their sources in ppq.quantization.quantizer
-# for the initialization logic.
-# ------------------------------------------------------------
-_ = PFL.Quantizer(platform=TargetPlatform.TRT_FP8, graph=graph)           # get the quantizer for TRT_FP8
-_ = PFL.Quantizer(platform=TargetPlatform.GRAPHCORE_FP8, graph=graph)     # get the quantizer for GRAPHCORE_FP8
-quantizer = PFL.Quantizer(platform=TargetPlatform.TRT_INT8, graph=graph)  # get the quantizer for TRT_INT8
-
-# ------------------------------------------------------------
-# The dispatcher is another core PPQ type; it partitions the compute graph.
-# Before quantization starts, your graph is split into a quantizable region and
-# a non-quantizable region; the latter is usually the subgraph of shape-inference ops.
-# *** The quantizer only initializes quantization info for ops in the quantizable region. ***
-# The dispatching decision is written into each op's attributes; you can read it
-# through op.platform.
-# ------------------------------------------------------------
-dispatching = PFL.Dispatcher(graph=graph).dispatch(                       # build the dispatching table
- quant_types=quantizer.quant_operation_types)
-
-for op in graph.operations.values():
-    # quantize_operation initializes quantization info for the op; `platform`
-    # carries its dispatching decision. Ops dispatched to TargetPlatform.FP32
-    # are left unquantized. You may override the dispatching by hand, e.g.
-    # (with placeholder op names):
-    #   dispatching['Op1'] = TargetPlatform.FP32     # force Op1 out of the quantizable region
-    #   dispatching['Op2'] = TargetPlatform.TRT_INT8 # force Op2 into the quantizable region
-    quantizer.quantize_operation(
-        op_name = op.name, platform = dispatching[op.name])
-
-# ------------------------------------------------------------
-# Before building the quantization pipeline we must create an executor, which
-# simulates the hardware and runs your network. Note that the executor analyzes
-# and caches the network structure; if the structure changes, you must create a
-# new executor. The previous step replaced ordinary ops with quantized ones,
-# which changes the structure, so the executor must be created after it.
-# ------------------------------------------------------------
-collate_fn = lambda x: x.cuda()
-executor = TorchExecutor(graph=graph, device='cuda')
-executor.tracing_operation_meta(inputs=collate_fn(dataset[0]))
-executor.load_graph(graph=graph)
-
-# ------------------------------------------------------------
-# If your model contains an NMS op, PPQ does not know how to execute it, but it
-# is irrelevant to quantization. You can therefore register a fake NMS forward
-# function with PPQ so the forward pass can still complete.
-# ------------------------------------------------------------
-from ppq.api import register_operation_handler
-def nms_forward_function(op: Operation, values: List[torch.Tensor], **kwards) -> List[torch.Tensor]:
- return (
- torch.zeros([1, 1], dtype=torch.int32).cuda(),
- torch.zeros([1, 100, 4],dtype=torch.float32).cuda(),
- torch.zeros([1, 100],dtype=torch.float32).cuda(),
- torch.zeros([1, 100], dtype=torch.int32).cuda()
- )
-register_operation_handler(nms_forward_function, 'EfficientNMS_TRT', platform=TargetPlatform.FP32)
-
-# ------------------------------------------------------------
-# The code below creates the quantization pipeline, another core PPQ type.
-# In PPQ, quantization is carried out by individual passes
-# (QuantizationOptimizationPass); the pipeline is a collection of such passes,
-# invoked one after another to modify the TQCs and ultimately quantize the model.
-# Here we add 7 passes to the pipeline, each with its own job:
-
-# QuantizeSimplifyPass    - remove redundant quantization info from the network
-# QuantizeFusionPass      - adjust quantization-info states to mimic inference-graph fusion
-# ParameterQuantizePass   - calibrate all parameters, generate their scales, and set the TQC state to ACTIVED
-# RuntimeCalibrationPass  - calibrate all activations, generate their scales, and set the TQC state to ACTIVED
-# QuantAlignmentPass      - align the quantization of concat, add, sum, sub, and pooling ops
-# LearnedStepSizePass     - fine-tune model weights to reduce quantization error
-# ParameterBakingPass     - bake quantized parameters into the model
-
-# PPQ ships dozens of different QuantizationOptimizationPasses; you can combine
-# them for custom behavior, or subclass QuantizationOptimizationPass to create
-# new optimization passes of your own.
-# ------------------------------------------------------------
-pipeline = PFL.Pipeline([
- QuantizeSimplifyPass(),
- QuantizeFusionPass(
- activation_type=quantizer.activation_fusion_types),
- ParameterQuantizePass(),
- RuntimeCalibrationPass(),
- QuantAlignmentPass(force_overlap=True),
- LearnedStepSizePass(
- steps=1000, is_scale_trainable=True,
- lr=1e-5, block_size=4, collecting_device='cuda'),
- ParameterBakingPass()
-])
-
-with ENABLE_CUDA_KERNEL():
-    # run the pipeline to quantize the model
- pipeline.optimize(
- graph=graph, dataloader=dataset, verbose=True,
- calib_steps=32, collate_fn=collate_fn, executor=executor)
-
-    # run graph-wise quantization error analysis
- graphwise_error_analyse(
- graph=graph, running_device='cuda',
- dataloader=dataset, collate_fn=collate_fn)
-
-# ------------------------------------------------------------
-# Finally, we export the compute graph. We provide different export logic for
-# different inference frameworks; the `platform` argument tells PPQ where the
-# model will ultimately be deployed, and PPQ returns a matching GraphExporter
-# that translates PPQ's quantization info into what that framework needs.
-# You can also write your own GraphExporter class and register it with PPQ.
-# ------------------------------------------------------------
-exporter = PFL.Exporter(platform=TargetPlatform.TRT_INT8)
-exporter.export(file_path='Quantized.onnx', config_path='Quantized.json', graph=graph)
-
-# ------------------------------------------------------------
-# After exporting the onnx and json files, you can run write_qparams_onnx2trt.py
-# (next to this file) to generate the engine.
-#
-# Note that the exported onnx and json files are portable, but an engine, once
-# built, is not.
-# https://github.com/openppl-public/ppq/blob/master/md_doc/deploy_trt_by_OnnxParser.md
-#
-# Profiling script: https://github.com/openppl-public/ppq/blob/master/ppq/samples/TensorRT/Example_Profiling.py
-# ------------------------------------------------------------
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/ppq/write_qparams_onnx2trt.py b/cv/detection/yolov6/pytorch/tools/quantization/ppq/write_qparams_onnx2trt.py
deleted file mode 100644
index 7b48dc8bcc8beb6218fa3f084016bf14fa278a5e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/ppq/write_qparams_onnx2trt.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import os
-import json
-import argparse
-import tensorrt as trt
-
-TRT_LOGGER = trt.Logger()
-
-EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
-
-def GiB(val):
- return val * 1 << 30
-
-def json_load(filename):
- with open(filename) as json_file:
- data = json.load(json_file)
- return data
-
-def setDynamicRange(network, json_file):
- """Sets ranges for network layers."""
- quant_param_json = json_load(json_file)
- act_quant = quant_param_json["act_quant_info"]
-
- for i in range(network.num_inputs):
- input_tensor = network.get_input(i)
-        if input_tensor.name in act_quant:
- print(input_tensor.name)
- value = act_quant[input_tensor.name]
- tensor_max = abs(value)
- tensor_min = -abs(value)
- input_tensor.dynamic_range = (tensor_min, tensor_max)
-
- for i in range(network.num_layers):
- layer = network.get_layer(i)
-
- for output_index in range(layer.num_outputs):
- tensor = layer.get_output(output_index)
-
-            if tensor.name in act_quant:
- print("\033[1;32mWrite quantization parameters:%s\033[0m" % tensor.name)
- value = act_quant[tensor.name]
- tensor_max = abs(value)
- tensor_min = -abs(value)
- tensor.dynamic_range = (tensor_min, tensor_max)
- else:
- print("\033[1;31mNo quantization parameters are written: %s\033[0m" % tensor.name)
-
-
-def build_engine(onnx_file, json_file, engine_file):
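-    """Parse the ONNX model, apply dynamic ranges from the qparams JSON, and build an INT8 engine."""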
- builder = trt.Builder(TRT_LOGGER)
- network = builder.create_network(EXPLICIT_BATCH)
-
- config = builder.create_builder_config()
-
-    # If it is a dynamic ONNX model, you need to add the following.
- # profile = builder.create_optimization_profile()
- # profile.set_shape("input_name", (batch, channels, min_h, min_w), (batch, channels, opt_h, opt_w), (batch, channels, max_h, max_w))
- # config.add_optimization_profile(profile)
-
-
- parser = trt.OnnxParser(network, TRT_LOGGER)
- config.max_workspace_size = GiB(1)
-
- if not os.path.exists(onnx_file):
- quit('ONNX file {} not found'.format(onnx_file))
-
- with open(onnx_file, 'rb') as model:
- if not parser.parse(model.read()):
- print('ERROR: Failed to parse the ONNX file.')
- for error in range(parser.num_errors):
- print(parser.get_error(error))
- return None
-
- config.set_flag(trt.BuilderFlag.INT8)
-
- setDynamicRange(network, json_file)
-
- engine = builder.build_engine(network, config)
-
- with open(engine_file, "wb") as f:
- f.write(engine.serialize())
-
-
-if __name__ == '__main__':
- # Add plugins if needed
- # import ctypes
- # ctypes.CDLL("libmmdeploy_tensorrt_ops.so")
-    parser = argparse.ArgumentParser(description='Write quantization parameters into an ONNX-parsed network and build a TensorRT engine.')
- parser.add_argument('--onnx', type=str, default=None)
- parser.add_argument('--qparam_json', type=str, default=None)
- parser.add_argument('--engine', type=str, default=None)
- arg = parser.parse_args()
-
- build_engine(arg.onnx, arg.qparam_json, arg.engine)
- print("\033[1;32mgenerate %s\033[0m" % arg.engine)
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/Calibrator.py b/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/Calibrator.py
deleted file mode 100755
index efe358dd1e95be091c7ce4a6214f40cf206751bd..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/Calibrator.py
+++ /dev/null
@@ -1,211 +0,0 @@
-#
-# Modified by Meituan
-# 2022.6.24
-#
-
-# Copyright 2019 NVIDIA Corporation
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import sys
-import glob
-import random
-import logging
-import cv2
-
-import numpy as np
-from PIL import Image
-import tensorrt as trt
-import pycuda.driver as cuda
-import pycuda.autoinit
-
-logging.basicConfig(level=logging.DEBUG,
- format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S")
-logger = logging.getLogger(__name__)
-
-def preprocess_yolov6(image, channels=3, height=224, width=224):
- """Pre-processing for YOLOv6-based Object Detection Models
-
- Parameters
- ----------
- image: PIL.Image
- The image resulting from PIL.Image.open(filename) to preprocess
- channels: int
- The number of channels the image has (Usually 1 or 3)
- height: int
- The desired height of the image (usually 640)
- width: int
- The desired width of the image (usually 640)
-
- Returns
- -------
- img_data: numpy array
- The preprocessed image data in the form of a numpy array
-
- """
- # Get the image in CHW format
- resized_image = image.resize((width, height), Image.BILINEAR)
- img_data = np.asarray(resized_image).astype(np.float32)
-
- if len(img_data.shape) == 2:
- # For images without a channel dimension, we stack
- img_data = np.stack([img_data] * 3)
- logger.debug("Received grayscale image. Reshaped to {:}".format(img_data.shape))
- else:
- img_data = img_data.transpose([2, 0, 1])
-
- mean_vec = np.array([0.0, 0.0, 0.0])
- stddev_vec = np.array([1.0, 1.0, 1.0])
- assert img_data.shape[0] == channels
-
- for i in range(img_data.shape[0]):
- # Scale each pixel to [0, 1] and normalize per channel.
- img_data[i, :, :] = (img_data[i, :, :] / 255.0 - mean_vec[i]) / stddev_vec[i]
-
- return img_data
-
-
-def get_int8_calibrator(calib_cache, calib_data, max_calib_size, calib_batch_size):
- # Use calibration cache if it exists
- if os.path.exists(calib_cache):
- logger.info("Skipping calibration files, using calibration cache: {:}".format(calib_cache))
- calib_files = []
- # Use calibration files from validation dataset if no cache exists
- else:
- if not calib_data:
- raise ValueError("ERROR: Int8 mode requested, but no calibration data provided. Please provide --calibration-data /path/to/calibration/files")
-
- calib_files = get_calibration_files(calib_data, max_calib_size)
-
-
- int8_calibrator = ImageCalibrator(calibration_files=calib_files,
- batch_size=calib_batch_size,
- cache_file=calib_cache)
- return int8_calibrator
-
-
-def get_calibration_files(calibration_data, max_calibration_size=None, allowed_extensions=(".jpeg", ".jpg", ".png")):
- """Returns a list of all filenames ending with `allowed_extensions` found in the `calibration_data` directory.
-
- Parameters
- ----------
- calibration_data: str
- Path to directory containing desired files.
- max_calibration_size: int
- Max number of files to use for calibration. If calibration_data contains more than this number,
- a random sample of size max_calibration_size will be returned instead. If None, all samples will be used.
-
- Returns
- -------
- calibration_files: List[str]
- List of filenames contained in the `calibration_data` directory ending with `allowed_extensions`.
- """
-
- logger.info("Collecting calibration files from: {:}".format(calibration_data))
- calibration_files = [path for path in glob.iglob(os.path.join(calibration_data, "**"), recursive=True)
- if os.path.isfile(path) and path.lower().endswith(allowed_extensions)]
- logger.info("Number of Calibration Files found: {:}".format(len(calibration_files)))
-
- if len(calibration_files) == 0:
- raise Exception("ERROR: Calibration data path [{:}] contains no files!".format(calibration_data))
-
- if max_calibration_size:
- if len(calibration_files) > max_calibration_size:
- logger.warning("Capping number of calibration images to max_calibration_size: {:}".format(max_calibration_size))
- random.seed(42) # Set seed for reproducibility
- calibration_files = random.sample(calibration_files, max_calibration_size)
-
- return calibration_files
-
-
-# https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/infer/Int8/EntropyCalibrator2.html
-class ImageCalibrator(trt.IInt8EntropyCalibrator2):
- """INT8 Calibrator Class for Imagenet-based Image Classification Models.
-
- Parameters
- ----------
- calibration_files: List[str]
- List of image filenames to use for INT8 Calibration
- batch_size: int
- Number of images to pass through in one batch during calibration
- input_shape: Tuple[int]
- Tuple of integers defining the shape of input to the model (Default: (3, 224, 224))
- cache_file: str
- Name of file to read/write calibration cache from/to.
- preprocess_func: function -> numpy.ndarray
- Pre-processing function to run on calibration data. This should match the pre-processing
- done at inference time. In general, this function should return a numpy array of
- shape `input_shape`.
- """
-
- def __init__(self, calibration_files=[], batch_size=32, input_shape=(3, 224, 224),
- cache_file="calibration.cache", use_cv2=False):
- super().__init__()
- self.input_shape = input_shape
- self.cache_file = cache_file
- self.batch_size = batch_size
- self.batch = np.zeros((self.batch_size, *self.input_shape), dtype=np.float32)
- self.device_input = cuda.mem_alloc(self.batch.nbytes)
-
- self.files = calibration_files
- self.use_cv2 = use_cv2
- # Pad the list so it is a multiple of batch_size
- if len(self.files) % self.batch_size != 0:
- logger.info("Padding # calibration files to be a multiple of batch_size {:}".format(self.batch_size))
- self.files += calibration_files[(len(calibration_files) % self.batch_size):self.batch_size]
-
- self.batches = self.load_batches()
- self.preprocess_func = preprocess_yolov6
-
- def load_batches(self):
- # Populates a persistent self.batch buffer with images.
- for index in range(0, len(self.files), self.batch_size):
- for offset in range(self.batch_size):
- if self.use_cv2:
- image = cv2.imread(self.files[index + offset])
- else:
- image = Image.open(self.files[index + offset])
- self.batch[offset] = self.preprocess_func(image, *self.input_shape)
- logger.info("Calibration images pre-processed: {:}/{:}".format(index+self.batch_size, len(self.files)))
- yield self.batch
-
- def get_batch_size(self):
- return self.batch_size
-
- def get_batch(self, names):
- try:
- # Assume self.batches is a generator that provides batch data.
- batch = next(self.batches)
- # Assume that self.device_input is a device buffer allocated by the constructor.
- cuda.memcpy_htod(self.device_input, batch)
- return [int(self.device_input)]
- except StopIteration:
- # When we're out of batches, we return either [] or None.
- # This signals to TensorRT that there is no calibration data remaining.
- return None
-
- def read_calibration_cache(self):
- # If there is a cache, use it instead of calibrating again. Otherwise, implicitly return None.
- if os.path.exists(self.cache_file):
- with open(self.cache_file, "rb") as f:
- logger.info("Using calibration cache to save time: {:}".format(self.cache_file))
- return f.read()
-
- def write_calibration_cache(self, cache):
- with open(self.cache_file, "wb") as f:
- logger.info("Caching calibration data for future use: {:}".format(self.cache_file))
- f.write(cache)
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/LICENSE b/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/LICENSE
deleted file mode 100644
index 604095e5cfadf9d941c8f6abf7cda5d7e10ef89c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/LICENSE
+++ /dev/null
@@ -1,191 +0,0 @@
-
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- Copyright 2020 NVIDIA Corporation
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/README.md b/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/README.md
deleted file mode 100644
index e2624aa4a2305f75539c0bb8edc2dc7a82b199b9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/README.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# ONNX -> TensorRT INT8
-These scripts were last tested using the
-[NGC TensorRT Container Version 20.06-py3](https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt).
-You can see the corresponding framework versions for this container [here](https://docs.nvidia.com/deeplearning/sdk/tensorrt-container-release-notes/rel_20.06.html#rel_20.06).
-
-## Quickstart
-
-> **NOTE**: This INT8 example is only valid for **fixed-shape** ONNX models at the moment.
->
-> INT8 calibration on **dynamic-shape** models is now supported; however, this example has not been
-> updated to reflect that yet. For more details on INT8 calibration for **dynamic-shape** models,
-> please see the [documentation](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#int8-calib-dynamic-shapes).
-
-### 1. Convert ONNX model to TensorRT INT8
-
-See `./onnx_to_tensorrt.py -h` for full list of command line arguments.
-
-```bash
-./onnx_to_tensorrt.py --explicit-batch \
-    --onnx model/yolov6.onnx \
-    --fp16 \
-    --int8 \
-    --calibration-cache="caches/yolov6.cache" \
-    -o yolov6.int8.engine
-```
-
-See the [INT8 Calibration](#int8-calibration) section below for details on calibration
-using your own model or different data, where you don't have an existing calibration cache
-or want to create a new one.
-
-## INT8 Calibration
-
-See [Calibrator.py](Calibrator.py) for a reference implementation
-of TensorRT's [IInt8EntropyCalibrator2](https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/infer/Int8/EntropyCalibrator2.html).
-
-This class can be tweaked to work for other kinds of models, inputs, etc.
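-
-For instance, a minimal usage sketch (an assumption: this directory is on `PYTHONPATH` so `Calibrator.py` is importable, and paths are placeholders) of wiring the calibrator into a TensorRT builder config:
-
-```python
-import tensorrt as trt
-from Calibrator import ImageCalibrator, get_calibration_files
-
-TRT_LOGGER = trt.Logger()
-builder = trt.Builder(TRT_LOGGER)
-network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
-config = builder.create_builder_config()
-config.set_flag(trt.BuilderFlag.INT8)
-config.int8_calibrator = ImageCalibrator(
-    calibration_files=get_calibration_files("/imagenet", max_calibration_size=512),
-    batch_size=32,
-    input_shape=(3, 640, 640),
-    cache_file="caches/yolov6.cache")
-# ... then parse your ONNX model into `network` and build the engine as usual
-```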
-
-In the [Quickstart](#quickstart) section above, we made use of a pre-existing cache,
-[caches/yolov6.cache](caches/yolov6.cache), to save time for the sake of an example.
-
-However, to calibrate using different data or a different model, you can do so with the `--calibration-data` argument.
-
-* This requires that you've mounted a dataset, such as Imagenet, to use for calibration.
- * Add something like `-v /imagenet:/imagenet` to your Docker command in Step (1)
- to mount a dataset found locally at `/imagenet`.
-* You can specify your own `preprocess_func` by defining it inside of `Calibrator.py`
-
-```bash
-# Path to dataset to use for calibration.
-# (Not necessary if you already have a calibration cache from a previous run.)
-CALIBRATION_DATA="/imagenet"
-
-# Truncate calibration images to a random sample of this amount if more are found.
-# (Not necessary if you already have a calibration cache from a previous run.)
-MAX_CALIBRATION_SIZE=512
-
-# Calibration cache to be used instead of calibration data if it already exists,
-# or the cache will be created from the calibration data if it doesn't exist.
-CACHE_FILENAME="caches/yolov6.cache"
-
-# Path to ONNX model
-ONNX_MODEL="model/yolov6.onnx"
-
-# Path to write TensorRT engine to
-OUTPUT="yolov6.int8.engine"
-
-# Creates an INT8 engine from your ONNX model, creating ${CACHE_FILENAME} based
-# on your ${CALIBRATION_DATA}, unless ${CACHE_FILENAME} already exists, in which
-# case it will simply use that instead. (Pre-processing is configured inside
-# Calibrator.py; see the Pre-processing section below.)
-python3 onnx_to_tensorrt.py --fp16 --int8 -v \
-    --max-calibration-size=${MAX_CALIBRATION_SIZE} \
-    --calibration-data=${CALIBRATION_DATA} \
-    --calibration-cache=${CACHE_FILENAME} \
-    --explicit-batch \
-    --onnx ${ONNX_MODEL} -o ${OUTPUT}
-
-```
-
-### Pre-processing
-
-In order to calibrate your model correctly, you should pre-process your data the same way
-that you would during inference.
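-
-For instance, a minimal sketch (the resize and scaling constants below are placeholders;
-they must match whatever your model's inference pipeline actually does):
-
-```python
-import numpy as np
-from PIL import Image
-
-def preprocess(image_path, size=(640, 640)):
-    """Resize, scale to [0, 1], and convert HWC uint8 -> CHW float32."""
-    img = Image.open(image_path).convert("RGB").resize(size)
-    arr = np.asarray(img, dtype=np.float32) / 255.0
-    return arr.transpose(2, 0, 1)  # HWC -> CHW
-```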
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/onnx_to_tensorrt.py b/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/onnx_to_tensorrt.py
deleted file mode 100755
index 48c4fcb552f79cded3bd35e99b755e7de259b33b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/post_training/onnx_to_tensorrt.py
+++ /dev/null
@@ -1,222 +0,0 @@
-#!/usr/bin/env python3
-
-#
-# Modified by Meituan
-# 2022.6.24
-#
-
-# Copyright 2019 NVIDIA Corporation
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import sys
-import glob
-import math
-import logging
-import argparse
-
-import tensorrt as trt
-
-TRT_LOGGER = trt.Logger()
-logging.basicConfig(level=logging.DEBUG,
- format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S")
-logger = logging.getLogger(__name__)
-
-
-def add_profiles(config, inputs, opt_profiles):
- logger.debug("=== Optimization Profiles ===")
- for i, profile in enumerate(opt_profiles):
- for inp in inputs:
- _min, _opt, _max = profile.get_shape(inp.name)
- logger.debug("{} - OptProfile {} - Min {} Opt {} Max {}".format(inp.name, i, _min, _opt, _max))
- config.add_optimization_profile(profile)
-
-
-def mark_outputs(network):
- # Mark last layer's outputs if not already marked
- # NOTE: This may not be correct in all cases
- last_layer = network.get_layer(network.num_layers-1)
- if not last_layer.num_outputs:
- logger.error("Last layer contains no outputs.")
- return
-
- for i in range(last_layer.num_outputs):
- network.mark_output(last_layer.get_output(i))
-
-
-def check_network(network):
- if not network.num_outputs:
- logger.warning("No output nodes found, marking last layer's outputs as network outputs. Correct this if wrong.")
- mark_outputs(network)
-
- inputs = [network.get_input(i) for i in range(network.num_inputs)]
- outputs = [network.get_output(i) for i in range(network.num_outputs)]
- max_len = max([len(inp.name) for inp in inputs] + [len(out.name) for out in outputs])
-
- logger.debug("=== Network Description ===")
- for i, inp in enumerate(inputs):
- logger.debug("Input {0} | Name: {1:{2}} | Shape: {3}".format(i, inp.name, max_len, inp.shape))
- for i, out in enumerate(outputs):
- logger.debug("Output {0} | Name: {1:{2}} | Shape: {3}".format(i, out.name, max_len, out.shape))
-
-
-def get_batch_sizes(max_batch_size):
- # Returns powers of 2, up to and including max_batch_size
- max_exponent = math.log2(max_batch_size)
- for i in range(int(max_exponent)+1):
- batch_size = 2**i
- yield batch_size
-
- if max_batch_size != batch_size:
- yield max_batch_size
-
-
-# TODO: This only covers dynamic shape for batch size, not dynamic shape for other dimensions
-def create_optimization_profiles(builder, inputs, batch_sizes=[1,8,16,32,64]):
- # Check if all inputs are fixed explicit batch to create a single profile and avoid duplicates
- if all([inp.shape[0] > -1 for inp in inputs]):
- profile = builder.create_optimization_profile()
- for inp in inputs:
- fbs, shape = inp.shape[0], inp.shape[1:]
- profile.set_shape(inp.name, min=(fbs, *shape), opt=(fbs, *shape), max=(fbs, *shape))
- return [profile]
-
- # Otherwise for mixed fixed+dynamic explicit batch inputs, create several profiles
- profiles = {}
- for bs in batch_sizes:
- if not profiles.get(bs):
- profiles[bs] = builder.create_optimization_profile()
-
- for inp in inputs:
- shape = inp.shape[1:]
- # Check if fixed explicit batch
- if inp.shape[0] > -1:
- bs = inp.shape[0]
-
- profiles[bs].set_shape(inp.name, min=(bs, *shape), opt=(bs, *shape), max=(bs, *shape))
-
- return list(profiles.values())
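-
-# Example (illustrative shapes): a single dynamic input of shape (-1, 3, 224, 224)
-# with batch_sizes=[1, 8] yields two profiles, each pinned to min=opt=max of
-# (1, 3, 224, 224) and (8, 3, 224, 224) respectively.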
-
-
-def main():
- parser = argparse.ArgumentParser(description="Creates a TensorRT engine from the provided ONNX file.\n")
- parser.add_argument("--onnx", required=True, help="The ONNX model file to convert to TensorRT")
- parser.add_argument("-o", "--output", type=str, default="model.engine", help="The path at which to write the engine")
- parser.add_argument("-b", "--max-batch-size", type=int, help="The max batch size for the TensorRT engine input")
- parser.add_argument("-v", "--verbosity", action="count", help="Verbosity for logging. (None) for ERROR, (-v) for INFO/WARNING/ERROR, (-vv) for VERBOSE.")
- parser.add_argument("--explicit-batch", action='store_true', help="Set trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH.")
- parser.add_argument("--explicit-precision", action='store_true', help="Set trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION.")
- parser.add_argument("--gpu-fallback", action='store_true', help="Set trt.BuilderFlag.GPU_FALLBACK.")
- parser.add_argument("--refittable", action='store_true', help="Set trt.BuilderFlag.REFIT.")
- parser.add_argument("--debug", action='store_true', help="Set trt.BuilderFlag.DEBUG.")
- parser.add_argument("--strict-types", action='store_true', help="Set trt.BuilderFlag.STRICT_TYPES.")
- parser.add_argument("--fp16", action="store_true", help="Attempt to use FP16 kernels when possible.")
- parser.add_argument("--int8", action="store_true", help="Attempt to use INT8 kernels when possible. This should generally be used in addition to the --fp16 flag. \
- ONLY SUPPORTS RESNET-LIKE MODELS SUCH AS RESNET50/VGG16/INCEPTION/etc.")
- parser.add_argument("--calibration-cache", help="(INT8 ONLY) The path to read/write from calibration cache.", default="calibration.cache")
- parser.add_argument("--calibration-data", help="(INT8 ONLY) The directory containing {*.jpg, *.jpeg, *.png} files to use for calibration. (ex: Imagenet Validation Set)", default=None)
- parser.add_argument("--calibration-batch-size", help="(INT8 ONLY) The batch size to use during calibration.", type=int, default=128)
- parser.add_argument("--max-calibration-size", help="(INT8 ONLY) The max number of data to calibrate on from --calibration-data.", type=int, default=2048)
- parser.add_argument("-s", "--simple", action="store_true", help="Use SimpleCalibrator with random data instead of ImagenetCalibrator for INT8 calibration.")
- args, _ = parser.parse_known_args()
-
- print(args)
-
- # Adjust logging verbosity
- if args.verbosity is None:
- TRT_LOGGER.min_severity = trt.Logger.Severity.ERROR
- # -v
- elif args.verbosity == 1:
- TRT_LOGGER.min_severity = trt.Logger.Severity.INFO
- # -vv
- else:
- TRT_LOGGER.min_severity = trt.Logger.Severity.VERBOSE
- logger.info("TRT_LOGGER Verbosity: {:}".format(TRT_LOGGER.min_severity))
-
- # Network flags
- network_flags = 0
- if args.explicit_batch:
- network_flags |= 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
- if args.explicit_precision:
- network_flags |= 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION)
-
- builder_flag_map = {
- 'gpu_fallback': trt.BuilderFlag.GPU_FALLBACK,
- 'refittable': trt.BuilderFlag.REFIT,
- 'debug': trt.BuilderFlag.DEBUG,
- 'strict_types': trt.BuilderFlag.STRICT_TYPES,
- 'fp16': trt.BuilderFlag.FP16,
- 'int8': trt.BuilderFlag.INT8,
- }
-
- # Building engine
- with trt.Builder(TRT_LOGGER) as builder, \
- builder.create_network(network_flags) as network, \
- builder.create_builder_config() as config, \
- trt.OnnxParser(network, TRT_LOGGER) as parser:
-
- config.max_workspace_size = 2**30 # 1GiB
-
- # Set Builder Config Flags
- for flag in builder_flag_map:
- if getattr(args, flag):
- logger.info("Setting {}".format(builder_flag_map[flag]))
- config.set_flag(builder_flag_map[flag])
-
-        # Fill network attributes with information by parsing the model
- with open(args.onnx, "rb") as f:
- if not parser.parse(f.read()):
- print('ERROR: Failed to parse the ONNX file: {}'.format(args.onnx))
- for error in range(parser.num_errors):
- print(parser.get_error(error))
- sys.exit(1)
-
- # Display network info and check certain properties
- check_network(network)
-
- if args.explicit_batch:
- # Add optimization profiles
- batch_sizes = [1, 8, 16, 32, 64]
- inputs = [network.get_input(i) for i in range(network.num_inputs)]
- opt_profiles = create_optimization_profiles(builder, inputs, batch_sizes)
- add_profiles(config, inputs, opt_profiles)
- # Implicit Batch Network
- else:
-            # -b/--max-batch-size has no default, so fall back to 1 if unset
-            builder.max_batch_size = args.max_batch_size or 1
- opt_profiles = []
-
- # Precision flags
- if args.fp16 and not builder.platform_has_fast_fp16:
- logger.warning("FP16 not supported on this platform.")
-
- if args.int8 and not builder.platform_has_fast_int8:
- logger.warning("INT8 not supported on this platform.")
-
- if args.int8:
-            from Calibrator import get_int8_calibrator  # local module
- config.int8_calibrator = get_int8_calibrator(args.calibration_cache,
- args.calibration_data,
- args.max_calibration_size,
- args.calibration_batch_size)
-
- logger.info("Building Engine...")
- with builder.build_engine(network, config) as engine, open(args.output, "wb") as f:
- logger.info("Serializing engine to file: {:}".format(args.output))
- f.write(engine.serialize())
-
-
-if __name__ == "__main__":
- main()
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/requirements.txt b/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/requirements.txt
deleted file mode 100644
index 5473d1024d84d5297a45369ddc9efdecec4c5da5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/requirements.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-# pip install -r requirements.txt
-# Python 3.8 environment
-
-tensorrt # TensorRT 8.0+
-pycuda==2020.1 # CUDA 11.0
-nvidia-pyindex
-pytorch-quantization
diff --git a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/training_aware/QAT_quantizer.py b/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/training_aware/QAT_quantizer.py
deleted file mode 100644
index 356330fa5a284bc0052986e74b7df2e8233c9588..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/quantization/tensorrt/training_aware/QAT_quantizer.py
+++ /dev/null
@@ -1,39 +0,0 @@
-#
-# QAT_quantizer.py
-# YOLOv6
-#
-# Created by Meituan on 2022/06/24.
-# Copyright © 2022
-#
-
-from absl import logging
-from pytorch_quantization import nn as quant_nn
-from pytorch_quantization import quant_modules
-
-# Call this function before defining the model
-def tensorrt_official_qat():
-    # Quantization Aware Training is based on the Straight Through Estimator (STE) derivative approximation.
-    # It is sometimes known as "quantization aware training".
-
- # PyTorch-Quantization is a toolkit for training and evaluating PyTorch models with simulated quantization.
- # Quantization can be added to the model automatically, or manually, allowing the model to be tuned for accuracy and performance.
-    # Quantization is compatible with NVIDIA's high-performance integer kernels which leverage integer Tensor Cores.
- # The quantized model can be exported to ONNX and imported by TensorRT 8.0 and later.
- # https://github.com/NVIDIA/TensorRT/blob/main/tools/pytorch-quantization/examples/finetune_quant_resnet50.ipynb
-
-    # The example below exports the quantized model to ONNX:
- # model.eval()
- # quant_nn.TensorQuantizer.use_fb_fake_quant = True # We have to shift to pytorch's fake quant ops before exporting the model to ONNX
- # opset_version = 13
-
- # Export ONNX for multiple batch sizes
- # print("Creating ONNX file: " + onnx_filename)
- # dummy_input = torch.randn(batch_onnx, 3, 224, 224, device='cuda') #TODO: switch input dims by model
- # torch.onnx.export(model, dummy_input, onnx_filename, verbose=False, opset_version=opset_version, enable_onnx_checker=False, do_constant_folding=True)
- try:
- quant_modules.initialize()
- except NameError:
-        logging.info("initialization error for quant_modules")
-
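-# Usage sketch (illustrative; `build_model`, `cfg`, `dummy_input`, and the
-# fine-tuning loop are stand-ins for your own code):
-#
-#   tensorrt_official_qat()            # patch layers before model construction
-#   model = build_model(cfg)           # Conv/Linear now carry fake-quant nodes
-#   ...fine-tune for a few epochs...
-#   quant_nn.TensorQuantizer.use_fb_fake_quant = True
-#   torch.onnx.export(model, dummy_input, "model_qat.onnx", opset_version=13)
-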
-# def QAT_quantizer():
-# coming soon
diff --git a/cv/detection/yolov6/pytorch/tools/train.py b/cv/detection/yolov6/pytorch/tools/train.py
deleted file mode 100644
index 635c68e4710581585a270b276759d1a6e6dfa873..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/tools/train.py
+++ /dev/null
@@ -1,142 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import argparse
-from logging import Logger
-import os
-import yaml
-import os.path as osp
-from pathlib import Path
-import torch
-import torch.distributed as dist
-import sys
-import datetime
-
-ROOT = os.getcwd()
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT))
-
-from yolov6.core.engine import Trainer
-from yolov6.utils.config import Config
-from yolov6.utils.events import LOGGER, save_yaml
-from yolov6.utils.envs import get_envs, select_device, set_random_seed
-from yolov6.utils.general import increment_name, find_latest_checkpoint, check_img_size
-
-
-def get_args_parser(add_help=True):
- parser = argparse.ArgumentParser(description='YOLOv6 PyTorch Training', add_help=add_help)
- parser.add_argument('--data-path', default='./data/coco.yaml', type=str, help='path of dataset')
- parser.add_argument('--conf-file', default='./configs/yolov6n.py', type=str, help='experiments description file')
- parser.add_argument('--img-size', default=640, type=int, help='train, val image size (pixels)')
- parser.add_argument('--rect', action='store_true', help='whether to use rectangular training, default is False')
- parser.add_argument('--batch-size', default=32, type=int, help='total batch size for all GPUs')
- parser.add_argument('--epochs', default=400, type=int, help='number of total epochs to run')
- parser.add_argument('--workers', default=8, type=int, help='number of data loading workers (default: 8)')
- parser.add_argument('--device', default='0', type=str, help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--eval-interval', default=20, type=int, help='evaluate at every interval epochs')
- parser.add_argument('--eval-final-only', action='store_true', help='only evaluate at the final epoch')
- parser.add_argument('--heavy-eval-range', default=50, type=int,
- help='evaluating every epoch for last such epochs (can be jointly used with --eval-interval)')
- parser.add_argument('--check-images', action='store_true', help='check images when initializing datasets')
- parser.add_argument('--check-labels', action='store_true', help='check label files when initializing datasets')
- parser.add_argument('--output-dir', default='./runs/train', type=str, help='path to save outputs')
- parser.add_argument('--name', default='exp', type=str, help='experiment name, saved to output_dir/name')
- parser.add_argument('--dist_url', default='env://', type=str, help='url used to set up distributed training')
- parser.add_argument('--gpu_count', type=int, default=0)
- parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume the most recent training')
- parser.add_argument('--write_trainbatch_tb', action='store_true', help='write train_batch image to tensorboard once an epoch, may slightly slower train speed if open')
- parser.add_argument('--stop_aug_last_n_epoch', default=15, type=int, help='stop strong aug at last n epoch, neg value not stop, default 15')
- parser.add_argument('--save_ckpt_on_last_n_epoch', default=-1, type=int, help='save last n epoch even not best or last, neg value not save')
- parser.add_argument('--distill', action='store_true', help='distill or not')
- parser.add_argument('--distill_feat', action='store_true', help='distill featmap or not')
- parser.add_argument('--quant', action='store_true', help='quant or not')
- parser.add_argument('--calib', action='store_true', help='run ptq')
- parser.add_argument('--teacher_model_path', type=str, default=None, help='teacher model path')
- parser.add_argument('--temperature', type=int, default=20, help='distill temperature')
- parser.add_argument('--fuse_ab', action='store_true', help='fuse ab branch in training process or not')
- parser.add_argument('--bs_per_gpu', default=32, type=int, help='batch size per GPU for auto-rescale learning rate, set to 16 for P6 models')
- parser.add_argument('--specific-shape', action='store_true', help='rectangular training')
- parser.add_argument('--height', type=int, default=None, help='image height of model input')
- parser.add_argument('--width', type=int, default=None, help='image width of model input')
- return parser
-
-
-def check_and_init(args):
- '''check config files and device.'''
- # check files
- master_process = args.rank == 0 if args.world_size > 1 else args.rank == -1
- if args.resume:
- # args.resume can be a checkpoint file path or a boolean value.
- checkpoint_path = args.resume if isinstance(args.resume, str) else find_latest_checkpoint()
-        assert os.path.isfile(checkpoint_path), f'the checkpoint path does not exist: {checkpoint_path}'
-        LOGGER.info(f'Resume training from the checkpoint file: {checkpoint_path}')
- resume_opt_file_path = Path(checkpoint_path).parent.parent / 'args.yaml'
- if osp.exists(resume_opt_file_path):
- with open(resume_opt_file_path) as f:
- args = argparse.Namespace(**yaml.safe_load(f)) # load args value from args.yaml
- else:
-            LOGGER.warning(f'Cannot find {Path(checkpoint_path).parent.parent / "args.yaml"}, '\
-                           f'the experiment log will be saved to {Path(checkpoint_path).parent.parent}')
-            LOGGER.warning('In this case, make sure to provide the configuration yourself, such as data and batch size.')
- args.save_dir = str(Path(checkpoint_path).parent.parent)
- args.resume = checkpoint_path # set the args.resume to checkpoint path.
- else:
- args.save_dir = str(increment_name(osp.join(args.output_dir, args.name)))
- if master_process:
- os.makedirs(args.save_dir)
-
- # check specific shape
- if args.specific_shape:
- if args.rect:
-            LOGGER.warning('A specific shape is set, so setting rect to True is unnecessary. YOLOv6 will use the specific shape for training.')
- args.height = check_img_size(args.height, 32, floor=256) # verify imgsz is gs-multiple
- args.width = check_img_size(args.width, 32, floor=256)
- else:
- args.img_size = check_img_size(args.img_size, 32, floor=256)
-
- cfg = Config.fromfile(args.conf_file)
- if not hasattr(cfg, 'training_mode'):
- setattr(cfg, 'training_mode', 'repvgg')
- # check device
- device = select_device(args.device)
- # set random seed
- set_random_seed(1+args.rank, deterministic=(args.rank == -1))
- # save args
- if master_process:
- save_yaml(vars(args), osp.join(args.save_dir, 'args.yaml'))
-
- return cfg, device, args
-
-
-def main(args):
- '''main function of training'''
- # Setup
- args.local_rank, args.rank, args.world_size = get_envs()
- cfg, device, args = check_and_init(args)
-    # reload envs because args was changed in check_and_init(args)
- args.local_rank, args.rank, args.world_size = get_envs()
- LOGGER.info(f'training args are: {args}\n')
- if args.local_rank != -1: # if DDP mode
- torch.cuda.set_device(args.local_rank)
- device = torch.device('cuda', args.local_rank)
- LOGGER.info('Initializing process group... ')
-        dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", \
-                                init_method=args.dist_url, rank=args.local_rank, world_size=args.world_size, timeout=datetime.timedelta(seconds=7200))
-
- # Start
- trainer = Trainer(args, cfg, device)
- # PTQ
- if args.quant and args.calib:
- trainer.calibrate(cfg)
- return
- trainer.train()
-
- # End
- if args.world_size > 1 and args.rank == 0:
- LOGGER.info('Destroying process group... ')
- dist.destroy_process_group()
-
-
-if __name__ == '__main__':
- args = get_args_parser().parse_args()
- main(args)
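-
-# Launch sketch (commands are assumptions based on the standard torch.distributed
-# workflow; adjust config, data path, and GPU ids to your setup):
-#   single GPU:
-#     python3 tools/train.py --conf-file ./configs/yolov6n.py --data-path ./data/coco.yaml --device 0
-#   multiple GPUs (DDP; get_envs() reads LOCAL_RANK/RANK/WORLD_SIZE from the environment):
-#     python3 -m torch.distributed.launch --nproc_per_node 8 tools/train.py \
-#       --conf-file ./configs/yolov6n.py --data-path ./data/coco.yaml --device 0,1,2,3,4,5,6,7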
diff --git a/cv/detection/yolov6/pytorch/yolov6/__init__.py b/cv/detection/yolov6/pytorch/yolov6/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/cv/detection/yolov6/pytorch/yolov6/assigners/__init__.py b/cv/detection/yolov6/pytorch/yolov6/assigners/__init__.py
deleted file mode 100644
index 8c1636e47d45f4b63a67b2e4322d3615f0bf86a9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/assigners/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .atss_assigner import ATSSAssigner
-from .tal_assigner import TaskAlignedAssigner
diff --git a/cv/detection/yolov6/pytorch/yolov6/assigners/anchor_generator.py b/cv/detection/yolov6/pytorch/yolov6/assigners/anchor_generator.py
deleted file mode 100644
index c8276418e11a1b7f860dbd5516ab06c5809cb8da..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/assigners/anchor_generator.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import torch
-from yolov6.utils.general import check_version
-
-torch_1_10_plus = check_version(torch.__version__, minimum='1.10.0')
-
-def generate_anchors(feats, fpn_strides, grid_cell_size=5.0, grid_cell_offset=0.5, device='cpu', is_eval=False, mode='af'):
- '''Generate anchors from features.'''
- anchors = []
- anchor_points = []
- stride_tensor = []
- num_anchors_list = []
- assert feats is not None
- if is_eval:
- for i, stride in enumerate(fpn_strides):
- _, _, h, w = feats[i].shape
- shift_x = torch.arange(end=w, device=device) + grid_cell_offset
- shift_y = torch.arange(end=h, device=device) + grid_cell_offset
- shift_y, shift_x = torch.meshgrid(shift_y, shift_x, indexing='ij') if torch_1_10_plus else torch.meshgrid(shift_y, shift_x)
- anchor_point = torch.stack(
- [shift_x, shift_y], axis=-1).to(torch.float)
- if mode == 'af': # anchor-free
- anchor_points.append(anchor_point.reshape([-1, 2]))
- stride_tensor.append(
- torch.full(
- (h * w, 1), stride, dtype=torch.float, device=device))
- elif mode == 'ab': # anchor-based
- anchor_points.append(anchor_point.reshape([-1, 2]).repeat(3,1))
- stride_tensor.append(
- torch.full(
- (h * w, 1), stride, dtype=torch.float, device=device).repeat(3,1))
- anchor_points = torch.cat(anchor_points)
- stride_tensor = torch.cat(stride_tensor)
- return anchor_points, stride_tensor
- else:
- for i, stride in enumerate(fpn_strides):
- _, _, h, w = feats[i].shape
- cell_half_size = grid_cell_size * stride * 0.5
- shift_x = (torch.arange(end=w, device=device) + grid_cell_offset) * stride
- shift_y = (torch.arange(end=h, device=device) + grid_cell_offset) * stride
- shift_y, shift_x = torch.meshgrid(shift_y, shift_x, indexing='ij') if torch_1_10_plus else torch.meshgrid(shift_y, shift_x)
- anchor = torch.stack(
- [
- shift_x - cell_half_size, shift_y - cell_half_size,
- shift_x + cell_half_size, shift_y + cell_half_size
- ],
- axis=-1).clone().to(feats[0].dtype)
- anchor_point = torch.stack(
- [shift_x, shift_y], axis=-1).clone().to(feats[0].dtype)
-
- if mode == 'af': # anchor-free
- anchors.append(anchor.reshape([-1, 4]))
- anchor_points.append(anchor_point.reshape([-1, 2]))
- elif mode == 'ab': # anchor-based
- anchors.append(anchor.reshape([-1, 4]).repeat(3,1))
- anchor_points.append(anchor_point.reshape([-1, 2]).repeat(3,1))
- num_anchors_list.append(len(anchors[-1]))
- stride_tensor.append(
- torch.full(
- [num_anchors_list[-1], 1], stride, dtype=feats[0].dtype))
- anchors = torch.cat(anchors)
- anchor_points = torch.cat(anchor_points).to(device)
- stride_tensor = torch.cat(stride_tensor).to(device)
- return anchors, anchor_points, num_anchors_list, stride_tensor
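-
-# Shape sketch (illustrative): with fpn_strides=(8, 16, 32) and a 640x640 input,
-# the three feature maps are 80x80, 40x40, and 20x20, so in 'af' mode the
-# concatenated anchor_points tensor has 6400 + 1600 + 400 = 8400 rows.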
diff --git a/cv/detection/yolov6/pytorch/yolov6/assigners/assigner_utils.py b/cv/detection/yolov6/pytorch/yolov6/assigners/assigner_utils.py
deleted file mode 100644
index a10f02a348f22930e485ca4597a258ffa824abae..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/assigners/assigner_utils.py
+++ /dev/null
@@ -1,89 +0,0 @@
-import torch
-import torch.nn.functional as F
-
-def dist_calculator(gt_bboxes, anchor_bboxes):
-    """compute center distances between all anchor bboxes and gt bboxes
-
- Args:
- gt_bboxes (Tensor): shape(bs*n_max_boxes, 4)
- anchor_bboxes (Tensor): shape(num_total_anchors, 4)
- Return:
- distances (Tensor): shape(bs*n_max_boxes, num_total_anchors)
- ac_points (Tensor): shape(num_total_anchors, 2)
- """
- gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0
- gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0
- gt_points = torch.stack([gt_cx, gt_cy], dim=1)
- ac_cx = (anchor_bboxes[:, 0] + anchor_bboxes[:, 2]) / 2.0
- ac_cy = (anchor_bboxes[:, 1] + anchor_bboxes[:, 3]) / 2.0
- ac_points = torch.stack([ac_cx, ac_cy], dim=1)
-
- distances = (gt_points[:, None, :] - ac_points[None, :, :]).pow(2).sum(-1).sqrt()
-
- return distances, ac_points
-
-def select_candidates_in_gts(xy_centers, gt_bboxes, eps=1e-9):
-    """select anchor centers that fall inside the gt bboxes
-
-    Args:
-        xy_centers (Tensor): shape(num_total_anchors, 2)
-        gt_bboxes (Tensor): shape(bs, n_max_boxes, 4)
-    Return:
-        (Tensor): shape(bs, n_max_boxes, num_total_anchors)
-    """
- n_anchors = xy_centers.size(0)
- bs, n_max_boxes, _ = gt_bboxes.size()
- _gt_bboxes = gt_bboxes.reshape([-1, 4])
- xy_centers = xy_centers.unsqueeze(0).repeat(bs * n_max_boxes, 1, 1)
- gt_bboxes_lt = _gt_bboxes[:, 0:2].unsqueeze(1).repeat(1, n_anchors, 1)
- gt_bboxes_rb = _gt_bboxes[:, 2:4].unsqueeze(1).repeat(1, n_anchors, 1)
- b_lt = xy_centers - gt_bboxes_lt
- b_rb = gt_bboxes_rb - xy_centers
- bbox_deltas = torch.cat([b_lt, b_rb], dim=-1)
- bbox_deltas = bbox_deltas.reshape([bs, n_max_boxes, n_anchors, -1])
- return (bbox_deltas.min(axis=-1)[0] > eps).to(gt_bboxes.dtype)
-
-def select_highest_overlaps(mask_pos, overlaps, n_max_boxes):
- """if an anchor box is assigned to multiple gts,
- the one with the highest iou will be selected.
-
- Args:
- mask_pos (Tensor): shape(bs, n_max_boxes, num_total_anchors)
- overlaps (Tensor): shape(bs, n_max_boxes, num_total_anchors)
- Return:
- target_gt_idx (Tensor): shape(bs, num_total_anchors)
- fg_mask (Tensor): shape(bs, num_total_anchors)
- mask_pos (Tensor): shape(bs, n_max_boxes, num_total_anchors)
- """
- fg_mask = mask_pos.sum(axis=-2)
- if fg_mask.max() > 1:
- mask_multi_gts = (fg_mask.unsqueeze(1) > 1).repeat([1, n_max_boxes, 1])
- max_overlaps_idx = overlaps.argmax(axis=1)
- is_max_overlaps = F.one_hot(max_overlaps_idx, n_max_boxes)
- is_max_overlaps = is_max_overlaps.permute(0, 2, 1).to(overlaps.dtype)
- mask_pos = torch.where(mask_multi_gts, is_max_overlaps, mask_pos)
- fg_mask = mask_pos.sum(axis=-2)
- target_gt_idx = mask_pos.argmax(axis=-2)
-    return target_gt_idx, fg_mask, mask_pos
-
-def iou_calculator(box1, box2, eps=1e-9):
- """Calculate iou for batch
-
- Args:
- box1 (Tensor): shape(bs, n_max_boxes, 1, 4)
- box2 (Tensor): shape(bs, 1, num_total_anchors, 4)
- Return:
- (Tensor): shape(bs, n_max_boxes, num_total_anchors)
- """
- box1 = box1.unsqueeze(2) # [N, M1, 4] -> [N, M1, 1, 4]
- box2 = box2.unsqueeze(1) # [N, M2, 4] -> [N, 1, M2, 4]
- px1y1, px2y2 = box1[:, :, :, 0:2], box1[:, :, :, 2:4]
- gx1y1, gx2y2 = box2[:, :, :, 0:2], box2[:, :, :, 2:4]
- x1y1 = torch.maximum(px1y1, gx1y1)
- x2y2 = torch.minimum(px2y2, gx2y2)
- overlap = (x2y2 - x1y1).clip(0).prod(-1)
- area1 = (px2y2 - px1y1).clip(0).prod(-1)
- area2 = (gx2y2 - gx1y1).clip(0).prod(-1)
- union = area1 + area2 - overlap + eps
-
- return overlap / union
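-
-# Worked example (illustrative numbers): for box1 = (0, 0, 10, 10) and
-# box2 = (5, 5, 15, 15), overlap = 5 * 5 = 25 and union = 100 + 100 - 25 = 175,
-# giving IoU = 25 / 175, roughly 0.143.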
diff --git a/cv/detection/yolov6/pytorch/yolov6/assigners/atss_assigner.py b/cv/detection/yolov6/pytorch/yolov6/assigners/atss_assigner.py
deleted file mode 100644
index 12a5f243bd3b5e2cb524fec125ea4240b1320fb9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/assigners/atss_assigner.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from yolov6.assigners.iou2d_calculator import iou2d_calculator
-from yolov6.assigners.assigner_utils import dist_calculator, select_candidates_in_gts, select_highest_overlaps, iou_calculator
-
-class ATSSAssigner(nn.Module):
- '''Adaptive Training Sample Selection Assigner'''
- def __init__(self,
- topk=9,
- num_classes=80):
- super(ATSSAssigner, self).__init__()
- self.topk = topk
- self.num_classes = num_classes
- self.bg_idx = num_classes
-
- @torch.no_grad()
- def forward(self,
- anc_bboxes,
- n_level_bboxes,
- gt_labels,
- gt_bboxes,
- mask_gt,
- pd_bboxes):
- r"""This code is based on
- https://github.com/fcjian/TOOD/blob/master/mmdet/core/bbox/assigners/atss_assigner.py
-
- Args:
- anc_bboxes (Tensor): shape(num_total_anchors, 4)
-            n_level_bboxes (List): anchor counts per FPN level, len = 3
- gt_labels (Tensor): shape(bs, n_max_boxes, 1)
- gt_bboxes (Tensor): shape(bs, n_max_boxes, 4)
- mask_gt (Tensor): shape(bs, n_max_boxes, 1)
-            pd_bboxes (Tensor): shape(bs, num_total_anchors, 4)
- Returns:
- target_labels (Tensor): shape(bs, num_total_anchors)
- target_bboxes (Tensor): shape(bs, num_total_anchors, 4)
- target_scores (Tensor): shape(bs, num_total_anchors, num_classes)
- fg_mask (Tensor): shape(bs, num_total_anchors)
- """
- self.n_anchors = anc_bboxes.size(0)
- self.bs = gt_bboxes.size(0)
- self.n_max_boxes = gt_bboxes.size(1)
-
- if self.n_max_boxes == 0:
- device = gt_bboxes.device
-            return torch.full([self.bs, self.n_anchors], self.bg_idx).to(device), \
- torch.zeros([self.bs, self.n_anchors, 4]).to(device), \
- torch.zeros([self.bs, self.n_anchors, self.num_classes]).to(device), \
- torch.zeros([self.bs, self.n_anchors]).to(device)
-
-
- overlaps = iou2d_calculator(gt_bboxes.reshape([-1, 4]), anc_bboxes)
- overlaps = overlaps.reshape([self.bs, -1, self.n_anchors])
-
- distances, ac_points = dist_calculator(gt_bboxes.reshape([-1, 4]), anc_bboxes)
- distances = distances.reshape([self.bs, -1, self.n_anchors])
-
- is_in_candidate, candidate_idxs = self.select_topk_candidates(
- distances, n_level_bboxes, mask_gt)
-
- overlaps_thr_per_gt, iou_candidates = self.thres_calculator(
- is_in_candidate, candidate_idxs, overlaps)
-
- # select candidates iou >= threshold as positive
- is_pos = torch.where(
- iou_candidates > overlaps_thr_per_gt.repeat([1, 1, self.n_anchors]),
- is_in_candidate, torch.zeros_like(is_in_candidate))
-
- is_in_gts = select_candidates_in_gts(ac_points, gt_bboxes)
- mask_pos = is_pos * is_in_gts * mask_gt
-
- target_gt_idx, fg_mask, mask_pos = select_highest_overlaps(
- mask_pos, overlaps, self.n_max_boxes)
-
- # assigned target
- target_labels, target_bboxes, target_scores = self.get_targets(
- gt_labels, gt_bboxes, target_gt_idx, fg_mask)
-
- # soft label with iou
- if pd_bboxes is not None:
- ious = iou_calculator(gt_bboxes, pd_bboxes) * mask_pos
- ious = ious.max(axis=-2)[0].unsqueeze(-1)
- target_scores *= ious
-
- return target_labels.long(), target_bboxes, target_scores, fg_mask.bool()
-
- def select_topk_candidates(self,
- distances,
- n_level_bboxes,
- mask_gt):
-
- mask_gt = mask_gt.repeat(1, 1, self.topk).bool()
- level_distances = torch.split(distances, n_level_bboxes, dim=-1)
- is_in_candidate_list = []
- candidate_idxs = []
- start_idx = 0
- for per_level_distances, per_level_boxes in zip(level_distances, n_level_bboxes):
-
- end_idx = start_idx + per_level_boxes
- selected_k = min(self.topk, per_level_boxes)
- _, per_level_topk_idxs = per_level_distances.topk(selected_k, dim=-1, largest=False)
- candidate_idxs.append(per_level_topk_idxs + start_idx)
- per_level_topk_idxs = torch.where(mask_gt,
- per_level_topk_idxs, torch.zeros_like(per_level_topk_idxs))
- is_in_candidate = F.one_hot(per_level_topk_idxs, per_level_boxes).sum(dim=-2)
- is_in_candidate = torch.where(is_in_candidate > 1,
- torch.zeros_like(is_in_candidate), is_in_candidate)
- is_in_candidate_list.append(is_in_candidate.to(distances.dtype))
- start_idx = end_idx
-
- is_in_candidate_list = torch.cat(is_in_candidate_list, dim=-1)
- candidate_idxs = torch.cat(candidate_idxs, dim=-1)
-
- return is_in_candidate_list, candidate_idxs
-
- def thres_calculator(self,
- is_in_candidate,
- candidate_idxs,
- overlaps):
-
- n_bs_max_boxes = self.bs * self.n_max_boxes
- _candidate_overlaps = torch.where(is_in_candidate > 0,
- overlaps, torch.zeros_like(overlaps))
- candidate_idxs = candidate_idxs.reshape([n_bs_max_boxes, -1])
- assist_idxs = self.n_anchors * torch.arange(n_bs_max_boxes, device=candidate_idxs.device)
- assist_idxs = assist_idxs[:,None]
-        flatten_idxs = candidate_idxs + assist_idxs
-        candidate_overlaps = _candidate_overlaps.reshape(-1)[flatten_idxs]
- candidate_overlaps = candidate_overlaps.reshape([self.bs, self.n_max_boxes, -1])
-
- overlaps_mean_per_gt = candidate_overlaps.mean(axis=-1, keepdim=True)
- overlaps_std_per_gt = candidate_overlaps.std(axis=-1, keepdim=True)
- overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt
-
- return overlaps_thr_per_gt, _candidate_overlaps
-
- def get_targets(self,
- gt_labels,
- gt_bboxes,
- target_gt_idx,
- fg_mask):
-
- # assigned target labels
- batch_idx = torch.arange(self.bs, dtype=gt_labels.dtype, device=gt_labels.device)
- batch_idx = batch_idx[...,None]
- target_gt_idx = (target_gt_idx + batch_idx * self.n_max_boxes).long()
- target_labels = gt_labels.flatten()[target_gt_idx.flatten()]
- target_labels = target_labels.reshape([self.bs, self.n_anchors])
- target_labels = torch.where(fg_mask > 0,
- target_labels, torch.full_like(target_labels, self.bg_idx))
-
- # assigned target boxes
- target_bboxes = gt_bboxes.reshape([-1, 4])[target_gt_idx.flatten()]
- target_bboxes = target_bboxes.reshape([self.bs, self.n_anchors, 4])
-
- # assigned target scores
- target_scores = F.one_hot(target_labels.long(), self.num_classes + 1).float()
- target_scores = target_scores[:, :, :self.num_classes]
-
- return target_labels, target_bboxes, target_scores
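-
-# Threshold sketch (illustrative numbers): ATSS sets the per-gt IoU threshold to
-# mean + std of its topk candidate IoUs, e.g. mean 0.30 and std 0.10 give 0.40;
-# only candidates at or above 0.40 whose centers also lie inside the gt
-# become positives.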
diff --git a/cv/detection/yolov6/pytorch/yolov6/assigners/iou2d_calculator.py b/cv/detection/yolov6/pytorch/yolov6/assigners/iou2d_calculator.py
deleted file mode 100644
index 63768015b87d5d48a309103831703871b3647658..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/assigners/iou2d_calculator.py
+++ /dev/null
@@ -1,249 +0,0 @@
-#This code is based on
-#https://github.com/fcjian/TOOD/blob/master/mmdet/core/bbox/iou_calculators/iou2d_calculator.py
-
-import torch
-
-
-def cast_tensor_type(x, scale=1., dtype=None):
- if dtype == 'fp16':
- # scale is for preventing overflows
- x = (x / scale).half()
- return x
-
-
-def fp16_clamp(x, min=None, max=None):
- if not x.is_cuda and x.dtype == torch.float16:
- # clamp for cpu float16, tensor fp16 has no clamp implementation
- return x.float().clamp(min, max).half()
-
- return x.clamp(min, max)
-
-
-def iou2d_calculator(bboxes1, bboxes2, mode='iou', is_aligned=False, scale=1., dtype=None):
-    """2D Overlaps (e.g. IoUs, GIoUs) Calculator.
-
-    Calculate IoU between 2D bboxes.
-
-    Args:
-        bboxes1 (Tensor): bboxes have shape (m, 4) in <x1, y1, x2, y2>
-            format, or shape (m, 5) in <x1, y1, x2, y2, score> format.
-        bboxes2 (Tensor): bboxes have shape (n, 4) in <x1, y1, x2, y2>
-            format, shape (n, 5) in <x1, y1, x2, y2, score> format, or be
-            empty. If ``is_aligned`` is ``True``, then m and n must be
-            equal.
- mode (str): "iou" (intersection over union), "iof" (intersection
- over foreground), or "giou" (generalized intersection over
- union).
- is_aligned (bool, optional): If True, then m and n must be equal.
- Default False.
-
- Returns:
- Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,)
- """
- assert bboxes1.size(-1) in [0, 4, 5]
- assert bboxes2.size(-1) in [0, 4, 5]
- if bboxes2.size(-1) == 5:
- bboxes2 = bboxes2[..., :4]
- if bboxes1.size(-1) == 5:
- bboxes1 = bboxes1[..., :4]
-
- if dtype == 'fp16':
- # change tensor type to save cpu and cuda memory and keep speed
- bboxes1 = cast_tensor_type(bboxes1, scale, dtype)
- bboxes2 = cast_tensor_type(bboxes2, scale, dtype)
- overlaps = bbox_overlaps(bboxes1, bboxes2, mode, is_aligned)
- if not overlaps.is_cuda and overlaps.dtype == torch.float16:
- # resume cpu float32
- overlaps = overlaps.float()
- return overlaps
-
- return bbox_overlaps(bboxes1, bboxes2, mode, is_aligned)
-
-
-def bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-6):
- """Calculate overlap between two set of bboxes.
-
- FP16 Contributed by https://github.com/open-mmlab/mmdetection/pull/4889
- Note:
- Assume bboxes1 is M x 4, bboxes2 is N x 4, when mode is 'iou',
- there are some new generated variable when calculating IOU
- using bbox_overlaps function:
-
- 1) is_aligned is False
- area1: M x 1
- area2: N x 1
- lt: M x N x 2
- rb: M x N x 2
- wh: M x N x 2
- overlap: M x N x 1
- union: M x N x 1
- ious: M x N x 1
-
-        Total memory:
-            S = (9 x N x M + N + M) * 4 Byte
-
-        When using FP16, we can reduce:
-            R = (9 x N x M + N + M) * 4 / 2 Byte
-        R larger than (N + M) * 4 * 2 always holds when N and M >= 1.
-        Obviously, N + M <= N * M < 3 * N * M when N >= 2 and M >= 2, and
-        N + 1 < 3 * N when N or M is 1.
-
-        Given M = 40 (ground truth), N = 400000 (three anchor boxes
-        per grid cell, FPN, R-CNNs),
-        R = 275 MB (one time)
-
- A special case (dense detection), M = 512 (ground truth),
- R = 3516 MB = 3.43 GB
-
- When the batch size is B, reduce:
- B x R
-
- Therefore, CUDA memory runs out frequently.
-
- Experiments on GeForce RTX 2080Ti (11019 MiB):
-
- | dtype | M | N | Use | Real | Ideal |
- |:----:|:----:|:----:|:----:|:----:|:----:|
- | FP32 | 512 | 400000 | 8020 MiB | -- | -- |
- | FP16 | 512 | 400000 | 4504 MiB | 3516 MiB | 3516 MiB |
- | FP32 | 40 | 400000 | 1540 MiB | -- | -- |
-        | FP16 | 40 | 400000 | 1264 MiB | 276 MiB | 275 MiB |
-
- 2) is_aligned is True
- area1: N x 1
- area2: N x 1
- lt: N x 2
- rb: N x 2
- wh: N x 2
- overlap: N x 1
- union: N x 1
- ious: N x 1
-
- Total memory:
- S = 11 x N * 4 Byte
-
- When using FP16, we can reduce:
- R = 11 x N * 4 / 2 Byte
-
-        The same holds for 'giou' (which is larger than 'iou').
-
-        Time-wise, FP16 is generally faster than FP32.
-
-        When gpu_assign_thr is not -1, it takes more time on cpu
-        but does not reduce memory.
-        Therefore, we can halve the memory while keeping the speed.
-
- If ``is_aligned`` is ``False``, then calculate the overlaps between each
- bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned
- pair of bboxes1 and bboxes2.
-
- Args:
-        bboxes1 (Tensor): shape (B, m, 4) in <x1, y1, x2, y2> format or empty.
-        bboxes2 (Tensor): shape (B, n, 4) in <x1, y1, x2, y2> format or empty.
- B indicates the batch dim, in shape (B1, B2, ..., Bn).
- If ``is_aligned`` is ``True``, then m and n must be equal.
- mode (str): "iou" (intersection over union), "iof" (intersection over
- foreground) or "giou" (generalized intersection over union).
- Default "iou".
- is_aligned (bool, optional): If True, then m and n must be equal.
- Default False.
- eps (float, optional): A value added to the denominator for numerical
- stability. Default 1e-6.
-
- Returns:
- Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,)
-
- Example:
- >>> bboxes1 = torch.FloatTensor([
- >>> [0, 0, 10, 10],
- >>> [10, 10, 20, 20],
- >>> [32, 32, 38, 42],
- >>> ])
- >>> bboxes2 = torch.FloatTensor([
- >>> [0, 0, 10, 20],
- >>> [0, 10, 10, 19],
- >>> [10, 10, 20, 20],
- >>> ])
- >>> overlaps = bbox_overlaps(bboxes1, bboxes2)
- >>> assert overlaps.shape == (3, 3)
- >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True)
- >>> assert overlaps.shape == (3, )
-
- Example:
- >>> empty = torch.empty(0, 4)
- >>> nonempty = torch.FloatTensor([[0, 0, 10, 9]])
- >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
- >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
- >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
- """
-
- assert mode in ['iou', 'iof', 'giou'], f'Unsupported mode {mode}'
- # Either the boxes are empty or the length of boxes' last dimension is 4
- assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0)
- assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0)
-
- # Batch dim must be the same
- # Batch dim: (B1, B2, ... Bn)
- assert bboxes1.shape[:-2] == bboxes2.shape[:-2]
- batch_shape = bboxes1.shape[:-2]
-
- rows = bboxes1.size(-2)
- cols = bboxes2.size(-2)
- if is_aligned:
- assert rows == cols
-
- if rows * cols == 0:
- if is_aligned:
- return bboxes1.new(batch_shape + (rows, ))
- else:
- return bboxes1.new(batch_shape + (rows, cols))
-
- area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * (
- bboxes1[..., 3] - bboxes1[..., 1])
- area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * (
- bboxes2[..., 3] - bboxes2[..., 1])
-
- if is_aligned:
- lt = torch.max(bboxes1[..., :2], bboxes2[..., :2]) # [B, rows, 2]
- rb = torch.min(bboxes1[..., 2:], bboxes2[..., 2:]) # [B, rows, 2]
-
- wh = fp16_clamp(rb - lt, min=0)
- overlap = wh[..., 0] * wh[..., 1]
-
- if mode in ['iou', 'giou']:
- union = area1 + area2 - overlap
- else:
- union = area1
- if mode == 'giou':
- enclosed_lt = torch.min(bboxes1[..., :2], bboxes2[..., :2])
- enclosed_rb = torch.max(bboxes1[..., 2:], bboxes2[..., 2:])
- else:
- lt = torch.max(bboxes1[..., :, None, :2],
- bboxes2[..., None, :, :2]) # [B, rows, cols, 2]
- rb = torch.min(bboxes1[..., :, None, 2:],
- bboxes2[..., None, :, 2:]) # [B, rows, cols, 2]
-
- wh = fp16_clamp(rb - lt, min=0)
- overlap = wh[..., 0] * wh[..., 1]
-
- if mode in ['iou', 'giou']:
- union = area1[..., None] + area2[..., None, :] - overlap
- else:
- union = area1[..., None]
- if mode == 'giou':
- enclosed_lt = torch.min(bboxes1[..., :, None, :2],
- bboxes2[..., None, :, :2])
- enclosed_rb = torch.max(bboxes1[..., :, None, 2:],
- bboxes2[..., None, :, 2:])
-
- eps = union.new_tensor([eps])
- union = torch.max(union, eps)
- ious = overlap / union
- if mode in ['iou', 'iof']:
- return ious
- # calculate gious
- enclose_wh = fp16_clamp(enclosed_rb - enclosed_lt, min=0)
- enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1]
- enclose_area = torch.max(enclose_area, eps)
- gious = ious - (enclose_area - union) / enclose_area
- return gious
diff --git a/cv/detection/yolov6/pytorch/yolov6/assigners/tal_assigner.py b/cv/detection/yolov6/pytorch/yolov6/assigners/tal_assigner.py
deleted file mode 100644
index 45008f5acb48d5e1d0d077e19ca3c87e713a6779..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/assigners/tal_assigner.py
+++ /dev/null
@@ -1,173 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from yolov6.assigners.assigner_utils import select_candidates_in_gts, select_highest_overlaps, iou_calculator, dist_calculator
-
-class TaskAlignedAssigner(nn.Module):
- def __init__(self,
- topk=13,
- num_classes=80,
- alpha=1.0,
- beta=6.0,
- eps=1e-9):
- super(TaskAlignedAssigner, self).__init__()
- self.topk = topk
- self.num_classes = num_classes
- self.bg_idx = num_classes
- self.alpha = alpha
- self.beta = beta
- self.eps = eps
-
- @torch.no_grad()
- def forward(self,
- pd_scores,
- pd_bboxes,
- anc_points,
- gt_labels,
- gt_bboxes,
- mask_gt):
-        """This code is based on
- https://github.com/Nioolek/PPYOLOE_pytorch/blob/master/ppyoloe/assigner/tal_assigner.py
-
- Args:
- pd_scores (Tensor): shape(bs, num_total_anchors, num_classes)
- pd_bboxes (Tensor): shape(bs, num_total_anchors, 4)
- anc_points (Tensor): shape(num_total_anchors, 2)
- gt_labels (Tensor): shape(bs, n_max_boxes, 1)
- gt_bboxes (Tensor): shape(bs, n_max_boxes, 4)
- mask_gt (Tensor): shape(bs, n_max_boxes, 1)
- Returns:
- target_labels (Tensor): shape(bs, num_total_anchors)
- target_bboxes (Tensor): shape(bs, num_total_anchors, 4)
- target_scores (Tensor): shape(bs, num_total_anchors, num_classes)
- fg_mask (Tensor): shape(bs, num_total_anchors)
- """
- self.bs = pd_scores.size(0)
- self.n_max_boxes = gt_bboxes.size(1)
-
- if self.n_max_boxes == 0:
- device = gt_bboxes.device
- return torch.full_like(pd_scores[..., 0], self.bg_idx).to(device), \
- torch.zeros_like(pd_bboxes).to(device), \
- torch.zeros_like(pd_scores).to(device), \
- torch.zeros_like(pd_scores[..., 0]).to(device)
-
- cycle, step, self.bs = (1, self.bs, self.bs) if self.n_max_boxes <= 100 else (self.bs, 1, 1)
- target_labels_lst, target_bboxes_lst, target_scores_lst, fg_mask_lst = [], [], [], []
-        # loop over the batch dim in case there are numerous object boxes
- for i in range(cycle):
- start, end = i*step, (i+1)*step
- pd_scores_ = pd_scores[start:end, ...]
- pd_bboxes_ = pd_bboxes[start:end, ...]
- gt_labels_ = gt_labels[start:end, ...]
- gt_bboxes_ = gt_bboxes[start:end, ...]
- mask_gt_ = mask_gt[start:end, ...]
-
- mask_pos, align_metric, overlaps = self.get_pos_mask(
- pd_scores_, pd_bboxes_, gt_labels_, gt_bboxes_, anc_points, mask_gt_)
-
- target_gt_idx, fg_mask, mask_pos = select_highest_overlaps(
- mask_pos, overlaps, self.n_max_boxes)
-
- # assigned target
- target_labels, target_bboxes, target_scores = self.get_targets(
- gt_labels_, gt_bboxes_, target_gt_idx, fg_mask)
-
- # normalize
- align_metric *= mask_pos
- pos_align_metrics = align_metric.max(axis=-1, keepdim=True)[0]
- pos_overlaps = (overlaps * mask_pos).max(axis=-1, keepdim=True)[0]
- norm_align_metric = (align_metric * pos_overlaps / (pos_align_metrics + self.eps)).max(-2)[0].unsqueeze(-1)
- target_scores = target_scores * norm_align_metric
-
- # append
- target_labels_lst.append(target_labels)
- target_bboxes_lst.append(target_bboxes)
- target_scores_lst.append(target_scores)
- fg_mask_lst.append(fg_mask)
-
- # concat
- target_labels = torch.cat(target_labels_lst, 0)
- target_bboxes = torch.cat(target_bboxes_lst, 0)
- target_scores = torch.cat(target_scores_lst, 0)
- fg_mask = torch.cat(fg_mask_lst, 0)
-
- return target_labels, target_bboxes, target_scores, fg_mask.bool()
-
- def get_pos_mask(self,
- pd_scores,
- pd_bboxes,
- gt_labels,
- gt_bboxes,
- anc_points,
- mask_gt):
-
- # get anchor_align metric
- align_metric, overlaps = self.get_box_metrics(pd_scores, pd_bboxes, gt_labels, gt_bboxes)
- # get in_gts mask
- mask_in_gts = select_candidates_in_gts(anc_points, gt_bboxes)
- # get topk_metric mask
- mask_topk = self.select_topk_candidates(
- align_metric * mask_in_gts, topk_mask=mask_gt.repeat([1, 1, self.topk]).bool())
- # merge all mask to a final mask
- mask_pos = mask_topk * mask_in_gts * mask_gt
-
- return mask_pos, align_metric, overlaps
-
- def get_box_metrics(self,
- pd_scores,
- pd_bboxes,
- gt_labels,
- gt_bboxes):
-
- pd_scores = pd_scores.permute(0, 2, 1)
- gt_labels = gt_labels.to(torch.long)
- ind = torch.zeros([2, self.bs, self.n_max_boxes], dtype=torch.long)
- ind[0] = torch.arange(end=self.bs).view(-1, 1).repeat(1, self.n_max_boxes)
- ind[1] = gt_labels.squeeze(-1)
- bbox_scores = pd_scores[ind[0], ind[1]]
-
- overlaps = iou_calculator(gt_bboxes, pd_bboxes)
- align_metric = bbox_scores.pow(self.alpha) * overlaps.pow(self.beta)
-
- return align_metric, overlaps
-
- def select_topk_candidates(self,
- metrics,
- largest=True,
- topk_mask=None):
-
- num_anchors = metrics.shape[-1]
- topk_metrics, topk_idxs = torch.topk(
- metrics, self.topk, axis=-1, largest=largest)
- if topk_mask is None:
-            # .max(...) returns a (values, indices) pair; compare the values only
-            topk_mask = (topk_metrics.max(axis=-1, keepdim=True)[0] > self.eps).tile(
-                [1, 1, self.topk])
- topk_idxs = torch.where(topk_mask, topk_idxs, torch.zeros_like(topk_idxs))
- is_in_topk = F.one_hot(topk_idxs, num_anchors).sum(axis=-2)
- is_in_topk = torch.where(is_in_topk > 1,
- torch.zeros_like(is_in_topk), is_in_topk)
- return is_in_topk.to(metrics.dtype)
-
- def get_targets(self,
- gt_labels,
- gt_bboxes,
- target_gt_idx,
- fg_mask):
-
- # assigned target labels
- batch_ind = torch.arange(end=self.bs, dtype=torch.int64, device=gt_labels.device)[...,None]
- target_gt_idx = target_gt_idx + batch_ind * self.n_max_boxes
- target_labels = gt_labels.long().flatten()[target_gt_idx]
-
- # assigned target boxes
- target_bboxes = gt_bboxes.reshape([-1, 4])[target_gt_idx]
-
- # assigned target scores
- target_labels[target_labels<0] = 0
- target_scores = F.one_hot(target_labels, self.num_classes)
- fg_scores_mask = fg_mask[:, :, None].repeat(1, 1, self.num_classes)
- target_scores = torch.where(fg_scores_mask > 0, target_scores,
- torch.full_like(target_scores, 0))
-
- return target_labels, target_bboxes, target_scores
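-
-# Alignment-metric sketch (illustrative numbers): with alpha=1 and beta=6 the
-# metric is score**1 * iou**6, so a prediction with cls score 0.5 and IoU 0.6
-# scores 0.5 * 0.6**6, roughly 0.023; the large beta strongly favours
-# well-localized boxes when picking the topk candidates per gt.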
diff --git a/cv/detection/yolov6/pytorch/yolov6/core/engine.py b/cv/detection/yolov6/pytorch/yolov6/core/engine.py
deleted file mode 100644
index 1054513529d7e406d2a4b566c4814d805806fc07..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/core/engine.py
+++ /dev/null
@@ -1,591 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import time
-from copy import deepcopy
-import os.path as osp
-
-from tqdm import tqdm
-
-import cv2
-import numpy as np
-import math
-import torch
-from torch.cuda import amp
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.tensorboard import SummaryWriter
-
-import tools.eval as eval
-from yolov6.data.data_load import create_dataloader
-from yolov6.models.yolo import build_model
-from yolov6.models.yolo_lite import build_model as build_lite_model
-
-from yolov6.models.losses.loss import ComputeLoss
-from yolov6.models.losses.loss_fuseab import ComputeLoss as ComputeLoss_ab
-from yolov6.models.losses.loss_distill import ComputeLoss as ComputeLoss_distill
-from yolov6.models.losses.loss_distill_ns import ComputeLoss as ComputeLoss_distill_ns
-
-from yolov6.utils.events import LOGGER, NCOLS, load_yaml, write_tblog, write_tbimg
-from yolov6.utils.ema import ModelEMA, de_parallel
-from yolov6.utils.checkpoint import load_state_dict, save_checkpoint, strip_optimizer
-from yolov6.solver.build import build_optimizer, build_lr_scheduler
-from yolov6.utils.RepOptimizer import extract_scales, RepVGGOptimizer
-from yolov6.utils.nms import xywh2xyxy
-from yolov6.utils.general import download_ckpt
-
-
-class Trainer:
- def __init__(self, args, cfg, device):
- self.args = args
- self.cfg = cfg
- self.device = device
- self.max_epoch = args.epochs
-
- if args.resume:
- self.ckpt = torch.load(args.resume, map_location='cpu')
-
- self.rank = args.rank
- self.local_rank = args.local_rank
- self.world_size = args.world_size
- self.main_process = self.rank in [-1, 0]
- self.save_dir = args.save_dir
- # get data loader
- self.data_dict = load_yaml(args.data_path)
- self.num_classes = self.data_dict['nc']
- # get model and optimizer
-        self.distill_ns = self.args.distill and self.cfg.model.type in ['YOLOv6n', 'YOLOv6s']
- model = self.get_model(args, cfg, self.num_classes, device)
- if self.args.distill:
- if self.args.fuse_ab:
-                LOGGER.error('ERROR: distillation requires fuse_ab to be turned off.\n')
- exit()
- self.teacher_model = self.get_teacher_model(args, cfg, self.num_classes, device)
- if self.args.quant:
- self.quant_setup(model, cfg, device)
- if cfg.training_mode == 'repopt':
- scales = self.load_scale_from_pretrained_models(cfg, device)
-            reinit = cfg.model.pretrained is None
- self.optimizer = RepVGGOptimizer(model, scales, args, cfg, reinit=reinit)
- else:
- self.optimizer = self.get_optimizer(args, cfg, model)
- self.scheduler, self.lf = self.get_lr_scheduler(args, cfg, self.optimizer)
- self.ema = ModelEMA(model) if self.main_process else None
- # tensorboard
- self.tblogger = SummaryWriter(self.save_dir) if self.main_process else None
- self.start_epoch = 0
- #resume
- if hasattr(self, "ckpt"):
- resume_state_dict = self.ckpt['model'].float().state_dict() # checkpoint state_dict as FP32
- model.load_state_dict(resume_state_dict, strict=True) # load
- self.start_epoch = self.ckpt['epoch'] + 1
- self.optimizer.load_state_dict(self.ckpt['optimizer'])
- self.scheduler.load_state_dict(self.ckpt['scheduler'])
- if self.main_process:
- self.ema.ema.load_state_dict(self.ckpt['ema'].float().state_dict())
- self.ema.updates = self.ckpt['updates']
- if self.start_epoch > (self.max_epoch - self.args.stop_aug_last_n_epoch):
- self.cfg.data_aug.mosaic = 0.0
- self.cfg.data_aug.mixup = 0.0
-
- self.train_loader, self.val_loader = self.get_data_loader(self.args, self.cfg, self.data_dict)
-
- self.model = self.parallel_model(args, model, device)
- self.model.nc, self.model.names = self.data_dict['nc'], self.data_dict['names']
-
- self.max_stepnum = len(self.train_loader)
- self.batch_size = args.batch_size
- self.img_size = args.img_size
- self.rect = args.rect
- self.vis_imgs_list = []
- self.write_trainbatch_tb = args.write_trainbatch_tb
- # set color for classnames
- self.color = [tuple(np.random.choice(range(256), size=3)) for _ in range(self.model.nc)]
- self.specific_shape = args.specific_shape
- self.height = args.height
- self.width = args.width
-
- self.loss_num = 3
- self.loss_info = ['Epoch', 'lr', 'iou_loss', 'dfl_loss', 'cls_loss']
- if self.args.distill:
- self.loss_num += 1
- self.loss_info += ['cwd_loss']
-
-
- # Training Process
- def train(self):
- try:
- self.before_train_loop()
- for self.epoch in range(self.start_epoch, self.max_epoch):
- self.before_epoch()
- self.train_one_epoch(self.epoch)
- self.after_epoch()
- self.strip_model()
-
-        except Exception:
- LOGGER.error('ERROR in training loop or eval/save model.')
- raise
- finally:
- self.train_after_loop()
-
- # Training loop for each epoch
- def train_one_epoch(self, epoch_num):
- try:
- for self.step, self.batch_data in self.pbar:
- self.train_in_steps(epoch_num, self.step)
- self.print_details()
-        except Exception:
- LOGGER.error('ERROR in training steps.')
- raise
-
- # Training one batch data.
- def train_in_steps(self, epoch_num, step_num):
- images, targets = self.prepro_data(self.batch_data, self.device)
- # plot train_batch and save to tensorboard once an epoch
- if self.write_trainbatch_tb and self.main_process and self.step == 0:
- self.plot_train_batch(images, targets)
- write_tbimg(self.tblogger, self.vis_train_batch, self.step + self.max_stepnum * self.epoch, type='train')
-
- # forward
- with amp.autocast(enabled=self.device != 'cpu'):
- _, _, batch_height, batch_width = images.shape
- preds, s_featmaps = self.model(images)
- if self.args.distill:
- with torch.no_grad():
- t_preds, t_featmaps = self.teacher_model(images)
- temperature = self.args.temperature
- total_loss, loss_items = self.compute_loss_distill(preds, t_preds, s_featmaps, t_featmaps, targets, \
- epoch_num, self.max_epoch, temperature, step_num,
- batch_height, batch_width)
-
- elif self.args.fuse_ab:
- total_loss, loss_items = self.compute_loss((preds[0],preds[3],preds[4]), targets, epoch_num,
- step_num, batch_height, batch_width) # YOLOv6_af
- total_loss_ab, loss_items_ab = self.compute_loss_ab(preds[:3], targets, epoch_num, step_num,
- batch_height, batch_width) # YOLOv6_ab
- total_loss += total_loss_ab
- loss_items += loss_items_ab
- else:
- total_loss, loss_items = self.compute_loss(preds, targets, epoch_num, step_num,
- batch_height, batch_width) # YOLOv6_af
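-            # DDP averages gradients across ranks, so scale the loss by
-            # world_size to recover a sum-equivalent gradient (a common
-            # convention in the YOLO family of trainers).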
- if self.rank != -1:
- total_loss *= self.world_size
- # backward
- self.scaler.scale(total_loss).backward()
- self.loss_items = loss_items
- self.update_optimizer()
-
- def after_epoch(self):
- lrs_of_this_epoch = [x['lr'] for x in self.optimizer.param_groups]
- self.scheduler.step() # update lr
- if self.main_process:
- self.ema.update_attr(self.model, include=['nc', 'names', 'stride']) # update attributes for ema model
-
-            remaining_epochs = self.max_epoch - 1 - self.epoch  # self.epoch starts from 0
- eval_interval = self.args.eval_interval if remaining_epochs >= self.args.heavy_eval_range else min(3, self.args.eval_interval)
- is_val_epoch = (remaining_epochs == 0) or ((not self.args.eval_final_only) and ((self.epoch + 1) % eval_interval == 0))
- if is_val_epoch:
- self.eval_model()
- self.ap = self.evaluate_results[1]
- self.best_ap = max(self.ap, self.best_ap)
- # save ckpt
- ckpt = {
- 'model': deepcopy(de_parallel(self.model)).half(),
- 'ema': deepcopy(self.ema.ema).half(),
- 'updates': self.ema.updates,
- 'optimizer': self.optimizer.state_dict(),
- 'scheduler': self.scheduler.state_dict(),
- 'epoch': self.epoch,
- 'results': self.evaluate_results,
- }
-
- save_ckpt_dir = osp.join(self.save_dir, 'weights')
- save_checkpoint(ckpt, (is_val_epoch) and (self.ap == self.best_ap), save_ckpt_dir, model_name='last_ckpt')
- if self.epoch >= self.max_epoch - self.args.save_ckpt_on_last_n_epoch:
- save_checkpoint(ckpt, False, save_ckpt_dir, model_name=f'{self.epoch}_ckpt')
-
-        # by default, save the best-AP ckpt within the stop-strong-aug epochs
- if self.epoch >= self.max_epoch - self.args.stop_aug_last_n_epoch:
- if self.best_stop_strong_aug_ap < self.ap:
- self.best_stop_strong_aug_ap = max(self.ap, self.best_stop_strong_aug_ap)
- save_checkpoint(ckpt, False, save_ckpt_dir, model_name='best_stop_aug_ckpt')
-
- del ckpt
-
- self.evaluate_results = list(self.evaluate_results)
-
- # log for tensorboard
- write_tblog(self.tblogger, self.epoch, self.evaluate_results, lrs_of_this_epoch, self.mean_loss)
- # save validation predictions to tensorboard
- write_tbimg(self.tblogger, self.vis_imgs_list, self.epoch, type='val')
-
- def eval_model(self):
- if not hasattr(self.cfg, "eval_params"):
- results, vis_outputs, vis_paths = eval.run(self.data_dict,
- batch_size=self.batch_size // self.world_size * 2,
- img_size=self.img_size,
- model=self.ema.ema if self.args.calib is False else self.model,
- conf_thres=0.03,
- dataloader=self.val_loader,
- save_dir=self.save_dir,
- task='train',
- specific_shape=self.specific_shape,
- height=self.height,
- width=self.width
- )
- else:
- def get_cfg_value(cfg_dict, value_str, default_value):
- if value_str in cfg_dict:
- if isinstance(cfg_dict[value_str], list):
- return cfg_dict[value_str][0] if cfg_dict[value_str][0] is not None else default_value
- else:
- return cfg_dict[value_str] if cfg_dict[value_str] is not None else default_value
- else:
- return default_value
- eval_img_size = get_cfg_value(self.cfg.eval_params, "img_size", self.img_size)
- results, vis_outputs, vis_paths = eval.run(self.data_dict,
- batch_size=get_cfg_value(self.cfg.eval_params, "batch_size", self.batch_size // self.world_size * 2),
- img_size=eval_img_size,
- model=self.ema.ema if self.args.calib is False else self.model,
- conf_thres=get_cfg_value(self.cfg.eval_params, "conf_thres", 0.03),
- dataloader=self.val_loader,
- save_dir=self.save_dir,
- task='train',
- shrink_size=get_cfg_value(self.cfg.eval_params, "shrink_size", eval_img_size),
- infer_on_rect=get_cfg_value(self.cfg.eval_params, "infer_on_rect", False),
- verbose=get_cfg_value(self.cfg.eval_params, "verbose", False),
- do_coco_metric=get_cfg_value(self.cfg.eval_params, "do_coco_metric", True),
- do_pr_metric=get_cfg_value(self.cfg.eval_params, "do_pr_metric", False),
- plot_curve=get_cfg_value(self.cfg.eval_params, "plot_curve", False),
- plot_confusion_matrix=get_cfg_value(self.cfg.eval_params, "plot_confusion_matrix", False),
- specific_shape=self.specific_shape,
- height=self.height,
- width=self.width
- )
-
- LOGGER.info(f"Epoch: {self.epoch} | mAP@0.5: {results[0]} | mAP@0.50:0.95: {results[1]}")
- self.evaluate_results = results[:2]
- # plot validation predictions
- self.plot_val_pred(vis_outputs, vis_paths)
-
-
- def before_train_loop(self):
- LOGGER.info('Training start...')
- self.start_time = time.time()
- self.warmup_stepnum = max(round(self.cfg.solver.warmup_epochs * self.max_stepnum), 1000) if self.args.quant is False else 0
- self.scheduler.last_epoch = self.start_epoch - 1
- self.last_opt_step = -1
- self.scaler = amp.GradScaler(enabled=self.device != 'cpu')
-
- self.best_ap, self.ap = 0.0, 0.0
- self.best_stop_strong_aug_ap = 0.0
- self.evaluate_results = (0, 0) # AP50, AP50_95
- # resume results
- if hasattr(self, "ckpt"):
- self.evaluate_results = self.ckpt['results']
- self.best_ap = self.evaluate_results[1]
- self.best_stop_strong_aug_ap = self.evaluate_results[1]
-
-
- self.compute_loss = ComputeLoss(num_classes=self.data_dict['nc'],
- ori_img_size=self.img_size,
- warmup_epoch=self.cfg.model.head.atss_warmup_epoch,
- use_dfl=self.cfg.model.head.use_dfl,
- reg_max=self.cfg.model.head.reg_max,
- iou_type=self.cfg.model.head.iou_type,
- fpn_strides=self.cfg.model.head.strides)
-
- if self.args.fuse_ab:
- self.compute_loss_ab = ComputeLoss_ab(num_classes=self.data_dict['nc'],
- ori_img_size=self.img_size,
- warmup_epoch=0,
- use_dfl=False,
- reg_max=0,
- iou_type=self.cfg.model.head.iou_type,
- fpn_strides=self.cfg.model.head.strides,
- )
- if self.args.distill :
- if self.cfg.model.type in ['YOLOv6n','YOLOv6s']:
- Loss_distill_func = ComputeLoss_distill_ns
- else:
- Loss_distill_func = ComputeLoss_distill
-
- self.compute_loss_distill = Loss_distill_func(num_classes=self.data_dict['nc'],
- ori_img_size=self.img_size,
- fpn_strides=self.cfg.model.head.strides,
- warmup_epoch=self.cfg.model.head.atss_warmup_epoch,
- use_dfl=self.cfg.model.head.use_dfl,
- reg_max=self.cfg.model.head.reg_max,
- iou_type=self.cfg.model.head.iou_type,
- distill_weight = self.cfg.model.head.distill_weight,
- distill_feat = self.args.distill_feat,
- )
-
- def before_epoch(self):
-        # stop strong augmentations such as mosaic and mixup for the last n epochs by recreating the dataloader
- if self.epoch == self.max_epoch - self.args.stop_aug_last_n_epoch:
- self.cfg.data_aug.mosaic = 0.0
- self.cfg.data_aug.mixup = 0.0
- self.train_loader, self.val_loader = self.get_data_loader(self.args, self.cfg, self.data_dict)
- self.model.train()
- if self.rank != -1:
- self.train_loader.sampler.set_epoch(self.epoch)
- self.mean_loss = torch.zeros(self.loss_num, device=self.device)
- self.optimizer.zero_grad()
-
- LOGGER.info(('\n' + '%10s' * (self.loss_num + 2)) % (*self.loss_info,))
- self.pbar = enumerate(self.train_loader)
- if self.main_process:
- self.pbar = tqdm(self.pbar, total=self.max_stepnum, ncols=NCOLS, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}')
-
-    # Print loss after each step
- def print_details(self):
- if self.main_process:
- self.mean_loss = (self.mean_loss * self.step + self.loss_items) / (self.step + 1)
- self.pbar.set_description(('%10s' + ' %10.4g' + '%10.4g' * self.loss_num) % (f'{self.epoch}/{self.max_epoch - 1}', \
- self.scheduler.get_last_lr()[0], *(self.mean_loss)))
-
- def strip_model(self):
- if self.main_process:
- LOGGER.info(f'\nTraining completed in {(time.time() - self.start_time) / 3600:.3f} hours.')
- save_ckpt_dir = osp.join(self.save_dir, 'weights')
- strip_optimizer(save_ckpt_dir, self.epoch) # strip optimizers for saved pt model
-
- # Empty cache if training finished
- def train_after_loop(self):
- if self.device != 'cpu':
- torch.cuda.empty_cache()
-
- def update_optimizer(self):
- curr_step = self.step + self.max_stepnum * self.epoch
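-        # Gradient accumulation targets a nominal 64-image batch: step the
-        # optimizer only every round(64 / batch_size) iterations. During
-        # warmup, the accumulation factor, lr and momentum are linearly
-        # interpolated up from their warmup values.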
- self.accumulate = max(1, round(64 / self.batch_size))
- if curr_step <= self.warmup_stepnum:
- self.accumulate = max(1, np.interp(curr_step, [0, self.warmup_stepnum], [1, 64 / self.batch_size]).round())
- for k, param in enumerate(self.optimizer.param_groups):
- warmup_bias_lr = self.cfg.solver.warmup_bias_lr if k == 2 else 0.0
- param['lr'] = np.interp(curr_step, [0, self.warmup_stepnum], [warmup_bias_lr, param['initial_lr'] * self.lf(self.epoch)])
- if 'momentum' in param:
- param['momentum'] = np.interp(curr_step, [0, self.warmup_stepnum], [self.cfg.solver.warmup_momentum, self.cfg.solver.momentum])
- if curr_step - self.last_opt_step >= self.accumulate:
- self.scaler.step(self.optimizer)
- self.scaler.update()
- self.optimizer.zero_grad()
- if self.ema:
- self.ema.update(self.model)
- self.last_opt_step = curr_step
-
- @staticmethod
- def get_data_loader(args, cfg, data_dict):
- train_path, val_path = data_dict['train'], data_dict['val']
- # check data
- nc = int(data_dict['nc'])
- class_names = data_dict['names']
-        assert len(class_names) == nc, 'the number of class names does not match the number of classes defined'
- grid_size = max(int(max(cfg.model.head.strides)), 32)
- # create train dataloader
- train_loader = create_dataloader(train_path, args.img_size, args.batch_size // args.world_size, grid_size,
- hyp=dict(cfg.data_aug), augment=True, rect=args.rect, rank=args.local_rank,
- workers=args.workers, shuffle=True, check_images=args.check_images,
- check_labels=args.check_labels, data_dict=data_dict, task='train',
- specific_shape=args.specific_shape, height=args.height, width=args.width)[0]
- # create val dataloader
- val_loader = None
- if args.rank in [-1, 0]:
- # TODO: check whether to set rect to self.rect?
- val_loader = create_dataloader(val_path, args.img_size, args.batch_size // args.world_size * 2, grid_size,
- hyp=dict(cfg.data_aug), rect=True, rank=-1, pad=0.5,
- workers=args.workers, check_images=args.check_images,
- check_labels=args.check_labels, data_dict=data_dict, task='val',
- specific_shape=args.specific_shape, height=args.height, width=args.width)[0]
-
- return train_loader, val_loader
-
- @staticmethod
- def prepro_data(batch_data, device):
- images = batch_data[0].to(device, non_blocking=True).float() / 255
- targets = batch_data[1].to(device)
- return images, targets
-
- def get_model(self, args, cfg, nc, device):
- if 'YOLOv6-lite' in cfg.model.type:
-            assert not self.args.fuse_ab, 'ERROR: YOLOv6-lite models do not support fuse_ab mode.'
-            assert not self.args.distill, 'ERROR: YOLOv6-lite models do not support distill mode.'
- model = build_lite_model(cfg, nc, device)
- else:
- model = build_model(cfg, nc, device, fuse_ab=self.args.fuse_ab, distill_ns=self.distill_ns)
- weights = cfg.model.pretrained
- if weights: # finetune if pretrained model is set
- if not os.path.exists(weights):
- download_ckpt(weights)
- LOGGER.info(f'Loading state_dict from {weights} for fine-tuning...')
- model = load_state_dict(weights, model, map_location=device)
-
- LOGGER.info('Model: {}'.format(model))
- return model
-
- def get_teacher_model(self, args, cfg, nc, device):
-        teacher_fuse_ab = cfg.model.head.num_layers == 3
- model = build_model(cfg, nc, device, fuse_ab=teacher_fuse_ab)
- weights = args.teacher_model_path
-        if weights: # load teacher weights if provided
- LOGGER.info(f'Loading state_dict from {weights} for teacher')
- model = load_state_dict(weights, model, map_location=device)
- LOGGER.info('Model: {}'.format(model))
- # Do not update running means and running vars
- for module in model.modules():
- if isinstance(module, torch.nn.BatchNorm2d):
- module.track_running_stats = False
- return model
-
- @staticmethod
- def load_scale_from_pretrained_models(cfg, device):
- weights = cfg.model.scales
- scales = None
- if not weights:
- LOGGER.error("ERROR: No scales provided to init RepOptimizer!")
- else:
- ckpt = torch.load(weights, map_location=device)
- scales = extract_scales(ckpt)
- return scales
-
-
- @staticmethod
- def parallel_model(args, model, device):
- # If DP mode
- dp_mode = device.type != 'cpu' and args.rank == -1
- if dp_mode and torch.cuda.device_count() > 1:
- LOGGER.warning('WARNING: DP not recommended, use DDP instead.\n')
- model = torch.nn.DataParallel(model)
-
- # If DDP mode
- ddp_mode = device.type != 'cpu' and args.rank != -1
- if ddp_mode:
- model = DDP(model, device_ids=[args.local_rank], output_device=args.local_rank)
-
- return model
-
- def get_optimizer(self, args, cfg, model):
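-        # Rescale hyperparameters to the actual effective batch: weight decay
-        # is scaled by the accumulated batch relative to the nominal 64, and
-        # lr0 by the total batch relative to the default world_size * bs_per_gpu.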
- accumulate = max(1, round(64 / args.batch_size))
- cfg.solver.weight_decay *= args.batch_size * accumulate / 64
- cfg.solver.lr0 *= args.batch_size / (self.world_size * args.bs_per_gpu) # rescale lr0 related to batchsize
- optimizer = build_optimizer(cfg, model)
- return optimizer
-
- @staticmethod
- def get_lr_scheduler(args, cfg, optimizer):
- epochs = args.epochs
- lr_scheduler, lf = build_lr_scheduler(cfg, optimizer, epochs)
- return lr_scheduler, lf
-
- def plot_train_batch(self, images, targets, max_size=1920, max_subplots=16):
- # Plot train_batch with labels
- if isinstance(images, torch.Tensor):
- images = images.cpu().float().numpy()
- if isinstance(targets, torch.Tensor):
- targets = targets.cpu().numpy()
- if np.max(images[0]) <= 1:
- images *= 255 # de-normalise (optional)
- bs, _, h, w = images.shape # batch size, _, height, width
- bs = min(bs, max_subplots) # limit plot images
- ns = np.ceil(bs ** 0.5) # number of subplots (square)
- paths = self.batch_data[2] # image paths
- # Build Image
- mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
- for i, im in enumerate(images):
-            if i == max_subplots: # stop once the subplot grid is full
- break
- x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
- im = im.transpose(1, 2, 0)
- mosaic[y:y + h, x:x + w, :] = im
- # Resize (optional)
- scale = max_size / ns / max(h, w)
- if scale < 1:
- h = math.ceil(scale * h)
- w = math.ceil(scale * w)
- mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h)))
- for i in range(bs):
- x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
- cv2.rectangle(mosaic, (x, y), (x + w, y + h), (255, 255, 255), thickness=2) # borders
- cv2.putText(mosaic, f"{os.path.basename(paths[i])[:40]}", (x + 5, y + 15),
- cv2.FONT_HERSHEY_COMPLEX, 0.5, color=(220, 220, 220), thickness=1) # filename
- if len(targets) > 0:
- ti = targets[targets[:, 0] == i] # image targets
- boxes = xywh2xyxy(ti[:, 2:6]).T
- classes = ti[:, 1].astype('int')
- labels = ti.shape[1] == 6 # labels if no conf column
- if boxes.shape[1]:
- if boxes.max() <= 1.01: # if normalized with tolerance 0.01
- boxes[[0, 2]] *= w # scale to pixels
- boxes[[1, 3]] *= h
- elif scale < 1: # absolute coords need scale if image scales
- boxes *= scale
- boxes[[0, 2]] += x
- boxes[[1, 3]] += y
- for j, box in enumerate(boxes.T.tolist()):
- box = [int(k) for k in box]
- cls = classes[j]
- color = tuple([int(x) for x in self.color[cls]])
- cls = self.data_dict['names'][cls] if self.data_dict['names'] else cls
- if labels:
- label = f'{cls}'
- cv2.rectangle(mosaic, (box[0], box[1]), (box[2], box[3]), color, thickness=1)
- cv2.putText(mosaic, label, (box[0], box[1] - 5), cv2.FONT_HERSHEY_COMPLEX, 0.5, color, thickness=1)
- self.vis_train_batch = mosaic.copy()
-
- def plot_val_pred(self, vis_outputs, vis_paths, vis_conf=0.3, vis_max_box_num=5):
- # plot validation predictions
- self.vis_imgs_list = []
- for (vis_output, vis_path) in zip(vis_outputs, vis_paths):
- vis_output_array = vis_output.cpu().numpy() # xyxy
- ori_img = cv2.imread(vis_path)
- for bbox_idx, vis_bbox in enumerate(vis_output_array):
- x_tl = int(vis_bbox[0])
- y_tl = int(vis_bbox[1])
- x_br = int(vis_bbox[2])
- y_br = int(vis_bbox[3])
- box_score = vis_bbox[4]
- cls_id = int(vis_bbox[5])
- # draw top n bbox
- if box_score < vis_conf or bbox_idx > vis_max_box_num:
- break
- cv2.rectangle(ori_img, (x_tl, y_tl), (x_br, y_br), tuple([int(x) for x in self.color[cls_id]]), thickness=1)
- cv2.putText(ori_img, f"{self.data_dict['names'][cls_id]}: {box_score:.2f}", (x_tl, y_tl - 10), cv2.FONT_HERSHEY_COMPLEX, 0.5, tuple([int(x) for x in self.color[cls_id]]), thickness=1)
- self.vis_imgs_list.append(torch.from_numpy(ori_img[:, :, ::-1].copy()))
-
-
- # PTQ
- def calibrate(self, cfg):
- def save_calib_model(model, cfg):
- # Save calibrated checkpoint
- output_model_path = os.path.join(cfg.ptq.calib_output_path, '{}_calib_{}.pt'.
- format(os.path.splitext(os.path.basename(cfg.model.pretrained))[0], cfg.ptq.calib_method))
-            if cfg.ptq.sensitive_layers_skip:
- output_model_path = output_model_path.replace('.pt', '_partial.pt')
- LOGGER.info('Saving calibrated model to {}... '.format(output_model_path))
- if not os.path.exists(cfg.ptq.calib_output_path):
- os.mkdir(cfg.ptq.calib_output_path)
- torch.save({'model': deepcopy(de_parallel(model)).half()}, output_model_path)
-        assert self.args.quant and self.args.calib
- if self.main_process:
- from tools.qat.qat_utils import ptq_calibrate
- ptq_calibrate(self.model, self.train_loader, cfg)
- self.epoch = 0
- self.eval_model()
- save_calib_model(self.model, cfg)
- # QAT
- def quant_setup(self, model, cfg, device):
- if self.args.quant:
- from tools.qat.qat_utils import qat_init_model_manu, skip_sensitive_layers
- qat_init_model_manu(model, cfg, self.args)
- # workaround
- model.neck.upsample_enable_quant(cfg.ptq.num_bits, cfg.ptq.calib_method)
- # if self.main_process:
- # print(model)
- # QAT
- if self.args.calib is False:
- if cfg.qat.sensitive_layers_skip:
- skip_sensitive_layers(model, cfg.qat.sensitive_layers_list)
- # QAT flow load calibrated model
- assert cfg.qat.calib_pt is not None, 'Please provide calibrated model'
- model.load_state_dict(torch.load(cfg.qat.calib_pt)['model'].float().state_dict())
- model.to(device)
diff --git a/cv/detection/yolov6/pytorch/yolov6/core/evaler.py b/cv/detection/yolov6/pytorch/yolov6/core/evaler.py
deleted file mode 100644
index e79f51bea7b5bbba05276fd248e857c708feef93..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/core/evaler.py
+++ /dev/null
@@ -1,545 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-from tqdm import tqdm
-import numpy as np
-import json
-import torch
-import yaml
-from pathlib import Path
-
-from pycocotools.coco import COCO
-from pycocotools.cocoeval import COCOeval
-
-from yolov6.data.data_load import create_dataloader
-from yolov6.utils.events import LOGGER, NCOLS
-from yolov6.utils.nms import non_max_suppression
-from yolov6.utils.general import download_ckpt
-from yolov6.utils.checkpoint import load_checkpoint
-from yolov6.utils.torch_utils import time_sync, get_model_info
-
-
-class Evaler:
- def __init__(self,
- data,
- batch_size=32,
- img_size=640,
- conf_thres=0.03,
- iou_thres=0.65,
- device='',
- half=True,
- save_dir='',
- shrink_size=640,
- infer_on_rect=False,
- verbose=False,
- do_coco_metric=True,
- do_pr_metric=False,
- plot_curve=True,
- plot_confusion_matrix=False,
- specific_shape=False,
- height=640,
- width=640
- ):
-        assert do_pr_metric or do_coco_metric, 'ERROR: set at least one val metric'
- self.data = data
- self.batch_size = batch_size
- self.img_size = img_size
- self.conf_thres = conf_thres
- self.iou_thres = iou_thres
- self.device = device
- self.half = half
- self.save_dir = save_dir
- self.shrink_size = shrink_size
- self.infer_on_rect = infer_on_rect
- self.verbose = verbose
- self.do_coco_metric = do_coco_metric
- self.do_pr_metric = do_pr_metric
- self.plot_curve = plot_curve
- self.plot_confusion_matrix = plot_confusion_matrix
- self.specific_shape = specific_shape
- self.height = height
- self.width = width
-
- def init_model(self, model, weights, task):
- if task != 'train':
- if not os.path.exists(weights):
- download_ckpt(weights)
- model = load_checkpoint(weights, map_location=self.device)
- self.stride = int(model.stride.max())
- # switch to deploy
- from yolov6.layers.common import RepVGGBlock
- for layer in model.modules():
- if isinstance(layer, RepVGGBlock):
- layer.switch_to_deploy()
- elif isinstance(layer, torch.nn.Upsample) and not hasattr(layer, 'recompute_scale_factor'):
- layer.recompute_scale_factor = None # torch 1.11.0 compatibility
- LOGGER.info("Switch model to deploy modality.")
- LOGGER.info("Model Summary: {}".format(get_model_info(model, self.img_size)))
- if self.device.type != 'cpu':
- model(torch.zeros(1, 3, self.img_size, self.img_size).to(self.device).type_as(next(model.parameters())))
- model.half() if self.half else model.float()
- return model
-
- def init_data(self, dataloader, task):
- '''Initialize dataloader.
- Returns a dataloader for task val or speed.
- '''
- self.is_coco = self.data.get("is_coco", False)
- self.ids = self.coco80_to_coco91_class() if self.is_coco else list(range(1000))
- if task != 'train':
- eval_hyp = {
- "shrink_size":self.shrink_size,
- }
- rect = self.infer_on_rect
- pad = 0.5 if rect else 0.0
- dataloader = create_dataloader(self.data[task if task in ('train', 'val', 'test') else 'val'],
- self.img_size, self.batch_size, self.stride, hyp=eval_hyp, check_labels=True, pad=pad, rect=rect,
- data_dict=self.data, task=task, specific_shape=self.specific_shape, height=self.height, width=self.width)[0]
- return dataloader
-
- def predict_model(self, model, dataloader, task):
- '''Model prediction
-        Predicts the whole dataset and gets the predicted results and inference time.
- '''
- self.speed_result = torch.zeros(4, device=self.device)
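-        # speed_result slots: [num_images, pre-process time, inference time, NMS time]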
- pred_results = []
-        pbar = tqdm(dataloader, desc=f"Running inference on the {task} dataset.", ncols=NCOLS)
-
-        # whether to compute the PR metric and plot the PR and P/R/F1 curves under the IoU-0.5 match rule
- if self.do_pr_metric:
- stats, ap = [], []
- seen = 0
- iouv = torch.linspace(0.5, 0.95, 10) # iou vector for mAP@0.5:0.95
- niou = iouv.numel()
- if self.plot_confusion_matrix:
- from yolov6.utils.metrics import ConfusionMatrix
- confusion_matrix = ConfusionMatrix(nc=model.nc)
-
- for i, (imgs, targets, paths, shapes) in enumerate(pbar):
- # pre-process
- t1 = time_sync()
- imgs = imgs.to(self.device, non_blocking=True)
- imgs = imgs.half() if self.half else imgs.float()
- imgs /= 255
- self.speed_result[1] += time_sync() - t1 # pre-process time
-
- # Inference
- t2 = time_sync()
- outputs, _ = model(imgs)
- self.speed_result[2] += time_sync() - t2 # inference time
-
- # post-process
- t3 = time_sync()
- outputs = non_max_suppression(outputs, self.conf_thres, self.iou_thres, multi_label=True)
- self.speed_result[3] += time_sync() - t3 # post-process time
- self.speed_result[0] += len(outputs)
-
- if self.do_pr_metric:
- import copy
- eval_outputs = copy.deepcopy([x.detach().cpu() for x in outputs])
-
- # save result
- pred_results.extend(self.convert_to_coco_format(outputs, imgs, paths, shapes, self.ids))
-
- # for tensorboard visualization, maximum images to show: 8
- if i == 0:
- vis_num = min(len(imgs), 8)
- vis_outputs = outputs[:vis_num]
- vis_paths = paths[:vis_num]
-
- if not self.do_pr_metric:
- continue
-
- # Statistics per image
- # This code is based on
- # https://github.com/ultralytics/yolov5/blob/master/val.py
- for si, pred in enumerate(eval_outputs):
- labels = targets[targets[:, 0] == si, 1:]
- nl = len(labels)
- tcls = labels[:, 0].tolist() if nl else [] # target class
- seen += 1
-
- if len(pred) == 0:
- if nl:
- stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
- continue
-
- # Predictions
- predn = pred.clone()
- self.scale_coords(imgs[si].shape[1:], predn[:, :4], shapes[si][0], shapes[si][1]) # native-space pred
-
- # Assign all predictions as incorrect
- correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool)
- if nl:
-
- from yolov6.utils.nms import xywh2xyxy
-
- # target boxes
- tbox = xywh2xyxy(labels[:, 1:5])
- tbox[:, [0, 2]] *= imgs[si].shape[1:][1]
- tbox[:, [1, 3]] *= imgs[si].shape[1:][0]
-
- self.scale_coords(imgs[si].shape[1:], tbox, shapes[si][0], shapes[si][1]) # native-space labels
-
- labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels
-
- from yolov6.utils.metrics import process_batch
-
- correct = process_batch(predn, labelsn, iouv)
- if self.plot_confusion_matrix:
- confusion_matrix.process_batch(predn, labelsn)
-
- # Append statistics (correct, conf, pcls, tcls)
- stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))
-
- if self.do_pr_metric:
- # Compute statistics
- stats = [np.concatenate(x, 0) for x in zip(*stats)] # to numpy
- if len(stats) and stats[0].any():
-
- from yolov6.utils.metrics import ap_per_class
- p, r, ap, f1, ap_class = ap_per_class(*stats, plot=self.plot_curve, save_dir=self.save_dir, names=model.names)
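-                # f1 is evaluated on a 1000-point confidence grid (hence the
-                # /1000 below): average over classes, then take the last
-                # (highest-confidence) maximum as the best threshold at IoU 0.5.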
-                AP50_F1_max_idx = len(f1.mean(0)) - f1.mean(0)[::-1].argmax() - 1
-                LOGGER.info(f"IoU 0.5 best mean-F1 threshold near {AP50_F1_max_idx / 1000.0}.")
- ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95
- mp, mr, map50, map = p[:, AP50_F1_max_idx].mean(), r[:, AP50_F1_max_idx].mean(), ap50.mean(), ap.mean()
- nt = np.bincount(stats[3].astype(np.int64), minlength=model.nc) # number of targets per class
-
- # Print results
- s = ('%-16s' + '%12s' * 7) % ('Class', 'Images', 'Labels', 'P@.5iou', 'R@.5iou', 'F1@.5iou', 'mAP@.5', 'mAP@.5:.95')
- LOGGER.info(s)
- pf = '%-16s' + '%12i' * 2 + '%12.3g' * 5 # print format
- LOGGER.info(pf % ('all', seen, nt.sum(), mp, mr, f1.mean(0)[AP50_F1_max_idx], map50, map))
-
- self.pr_metric_result = (map50, map)
-
- # Print results per class
- if self.verbose and model.nc > 1:
- for i, c in enumerate(ap_class):
- LOGGER.info(pf % (model.names[c], seen, nt[c], p[i, AP50_F1_max_idx], r[i, AP50_F1_max_idx],
- f1[i, AP50_F1_max_idx], ap50[i], ap[i]))
-
- if self.plot_confusion_matrix:
- confusion_matrix.plot(save_dir=self.save_dir, names=list(model.names))
- else:
-                LOGGER.info("Metric calculation failed; please check the dataset.")
- self.pr_metric_result = (0.0, 0.0)
-
- return pred_results, vis_outputs, vis_paths
-
-
- def eval_model(self, pred_results, model, dataloader, task):
- '''Evaluate models
-        For task speed, this function only evaluates the speed of the model and outputs the inference time.
- For task val, this function evaluates the speed and mAP by pycocotools, and returns
- inference time and mAP value.
- '''
- LOGGER.info(f'\nEvaluating speed.')
- self.eval_speed(task)
-
- if not self.do_coco_metric and self.do_pr_metric:
- return self.pr_metric_result
- LOGGER.info(f'\nEvaluating mAP by pycocotools.')
- if task != 'speed' and len(pred_results):
- if 'anno_path' in self.data:
- anno_json = self.data['anno_path']
- else:
- # generated coco format labels in dataset initialization
- task = 'val' if task == 'train' else task
- if not isinstance(self.data[task], list):
- self.data[task] = [self.data[task]]
- dataset_root = os.path.dirname(os.path.dirname(self.data[task][0]))
- base_name = os.path.basename(self.data[task][0])
- anno_json = os.path.join(dataset_root, 'annotations', f'instances_{base_name}.json')
- pred_json = os.path.join(self.save_dir, "predictions.json")
- LOGGER.info(f'Saving {pred_json}...')
- with open(pred_json, 'w') as f:
- json.dump(pred_results, f)
-
- anno = COCO(anno_json)
- pred = anno.loadRes(pred_json)
- cocoEval = COCOeval(anno, pred, 'bbox')
- if self.is_coco:
- imgIds = [int(os.path.basename(x).split(".")[0])
- for x in dataloader.dataset.img_paths]
- cocoEval.params.imgIds = imgIds
- cocoEval.evaluate()
- cocoEval.accumulate()
-
-            # print per-class AP from the pycocotools result
- if self.verbose:
-
- import copy
-                val_dataset_img_count = len(cocoEval.cocoGt.imgToAnns)
- val_dataset_anns_count = 0
- label_count_dict = {"images":set(), "anns":0}
- label_count_dicts = [copy.deepcopy(label_count_dict) for _ in range(model.nc)]
- for _, ann_i in cocoEval.cocoGt.anns.items():
- if ann_i["ignore"]:
- continue
- val_dataset_anns_count += 1
- nc_i = self.coco80_to_coco91_class().index(ann_i['category_id']) if self.is_coco else ann_i['category_id']
- label_count_dicts[nc_i]["images"].add(ann_i["image_id"])
- label_count_dicts[nc_i]["anns"] += 1
-
- s = ('%-16s' + '%12s' * 7) % ('Class', 'Labeled_images', 'Labels', 'P@.5iou', 'R@.5iou', 'F1@.5iou', 'mAP@.5', 'mAP@.5:.95')
- LOGGER.info(s)
-                # precision array axes: [iou_thr, recall, class, area_range, max_dets]; take area 'all' (index 0) and maxDets=100 (index 2)
- coco_p = cocoEval.eval['precision']
- coco_p_all = coco_p[:, :, :, 0, 2]
- map = np.mean(coco_p_all[coco_p_all>-1])
-
- coco_p_iou50 = coco_p[0, :, :, 0, 2]
- map50 = np.mean(coco_p_iou50[coco_p_iou50>-1])
- mp = np.array([np.mean(coco_p_iou50[ii][coco_p_iou50[ii]>-1]) for ii in range(coco_p_iou50.shape[0])])
- mr = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)
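-                    # mr reconstructs the 101-point recall axis of the
-                    # precision array; sweep F1 = 2PR / (P + R) over it to
-                    # find the best operating point at IoU 0.5.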
- mf1 = 2 * mp * mr / (mp + mr + 1e-16)
- i = mf1.argmax() # max F1 index
-
- pf = '%-16s' + '%12i' * 2 + '%12.3g' * 5 # print format
- LOGGER.info(pf % ('all', val_dataset_img_count, val_dataset_anns_count, mp[i], mr[i], mf1[i], map50, map))
-
- #compute each class best f1 and corresponding p and r
- for nc_i in range(model.nc):
- coco_p_c = coco_p[:, :, nc_i, 0, 2]
- map = np.mean(coco_p_c[coco_p_c>-1])
-
- coco_p_c_iou50 = coco_p[0, :, nc_i, 0, 2]
- map50 = np.mean(coco_p_c_iou50[coco_p_c_iou50>-1])
- p = coco_p_c_iou50
- r = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)
- f1 = 2 * p * r / (p + r + 1e-16)
- i = f1.argmax()
- LOGGER.info(pf % (model.names[nc_i], len(label_count_dicts[nc_i]["images"]), label_count_dicts[nc_i]["anns"], p[i], r[i], f1[i], map50, map))
- cocoEval.summarize()
- map, map50 = cocoEval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5)
- # Return results
- model.float() # for training
- if task != 'train':
- LOGGER.info(f"Results saved to {self.save_dir}")
- return (map50, map)
- return (0.0, 0.0)
-
- def eval_speed(self, task):
- '''Evaluate model inference speed.'''
- if task != 'train':
- n_samples = self.speed_result[0].item()
- pre_time, inf_time, nms_time = 1000 * self.speed_result[1:].cpu().numpy() / n_samples
- for n, v in zip(["pre-process", "inference", "NMS"],[pre_time, inf_time, nms_time]):
- LOGGER.info("Average {} time: {:.2f} ms".format(n, v))
-
- def box_convert(self, x):
- '''Convert boxes with shape [n, 4] from [x1, y1, x2, y2] to [x, y, w, h] where x1y1=top-left, x2y2=bottom-right.'''
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
- def scale_coords(self, img1_shape, coords, img0_shape, ratio_pad=None):
- '''Rescale coords (xyxy) from img1_shape to img0_shape.'''
-
- gain = ratio_pad[0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
-        coords[:, [0, 2]] /= gain[1] # x gain
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, [1, 3]] /= gain[0] # y gain
-
- if isinstance(coords, torch.Tensor): # faster individually
- coords[:, 0].clamp_(0, img0_shape[1]) # x1
- coords[:, 1].clamp_(0, img0_shape[0]) # y1
- coords[:, 2].clamp_(0, img0_shape[1]) # x2
- coords[:, 3].clamp_(0, img0_shape[0]) # y2
- else: # np.array (faster grouped)
- coords[:, [0, 2]] = coords[:, [0, 2]].clip(0, img0_shape[1]) # x1, x2
- coords[:, [1, 3]] = coords[:, [1, 3]].clip(0, img0_shape[0]) # y1, y2
- return coords
-
- def convert_to_coco_format(self, outputs, imgs, paths, shapes, ids):
- pred_results = []
- for i, pred in enumerate(outputs):
- if len(pred) == 0:
- continue
- path, shape = Path(paths[i]), shapes[i][0]
- self.scale_coords(imgs[i].shape[1:], pred[:, :4], shape, shapes[i][1])
- image_id = int(path.stem) if self.is_coco else path.stem
- bboxes = self.box_convert(pred[:, 0:4])
- bboxes[:, :2] -= bboxes[:, 2:] / 2
- cls = pred[:, 5]
- scores = pred[:, 4]
- for ind in range(pred.shape[0]):
- category_id = ids[int(cls[ind])]
- bbox = [round(x, 3) for x in bboxes[ind].tolist()]
- score = round(scores[ind].item(), 5)
- pred_data = {
- "image_id": image_id,
- "category_id": category_id,
- "bbox": bbox,
- "score": score
- }
- pred_results.append(pred_data)
- return pred_results
-
- @staticmethod
- def check_task(task):
- if task not in ['train', 'val', 'test', 'speed']:
- raise Exception("task argument error: only support 'train' / 'val' / 'test' / 'speed' task.")
-
- @staticmethod
- def check_thres(conf_thres, iou_thres, task):
- '''Check whether confidence and iou threshold are best for task val/speed'''
- if task != 'train':
- if task == 'val' or task == 'test':
- if conf_thres > 0.03:
-                    LOGGER.warning(f'The best conf_thresh when evaluating the model is less than 0.03, while you set it to: {conf_thres}')
-                if iou_thres != 0.65:
-                    LOGGER.warning(f'The best iou_thresh when evaluating the model is 0.65, while you set it to: {iou_thres}')
-            if task == 'speed' and conf_thres < 0.4:
-                LOGGER.warning(f'The best conf_thresh when testing the speed of the model is larger than 0.4, while you set it to: {conf_thres}')
-
- @staticmethod
- def reload_device(device, model, task):
- # device = 'cpu' or '0' or '0,1,2,3'
- if task == 'train':
- device = next(model.parameters()).device
- else:
- if device == 'cpu':
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
- elif device:
- os.environ['CUDA_VISIBLE_DEVICES'] = device
- assert torch.cuda.is_available()
- cuda = device != 'cpu' and torch.cuda.is_available()
- device = torch.device('cuda:0' if cuda else 'cpu')
- return device
-
- @staticmethod
- def reload_dataset(data, task='val'):
- with open(data, errors='ignore') as yaml_file:
- data = yaml.safe_load(yaml_file)
- task = 'test' if task == 'test' else 'val'
- path = data.get(task, 'val')
- if not isinstance(path, list):
- path = [path]
- for p in path:
- if not os.path.exists(p):
- raise Exception(f'Dataset path {p} not found.')
- return data
-
- @staticmethod
- def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20,
- 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
- 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58,
- 59, 60, 61, 62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79,
- 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
- return x
-
- def eval_trt(self, engine, stride=32):
- self.stride = stride
- def init_engine(engine):
- import tensorrt as trt
- from collections import namedtuple,OrderedDict
- Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
- logger = trt.Logger(trt.Logger.ERROR)
- trt.init_libnvinfer_plugins(logger, namespace="")
- with open(engine, 'rb') as f, trt.Runtime(logger) as runtime:
- model = runtime.deserialize_cuda_engine(f.read())
- bindings = OrderedDict()
- for index in range(model.num_bindings):
- name = model.get_binding_name(index)
- dtype = trt.nptype(model.get_binding_dtype(index))
- shape = tuple(model.get_binding_shape(index))
- data = torch.from_numpy(np.empty(shape, dtype=np.dtype(dtype))).to(self.device)
- bindings[name] = Binding(name, dtype, shape, data, int(data.data_ptr()))
- binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
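-            # Every TensorRT binding gets a pre-allocated torch tensor on the
-            # target device; execute_v2 consumes raw device pointers, so
-            # outputs land directly in these tensors without extra copies.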
- context = model.create_execution_context()
- return context, bindings, binding_addrs, model.get_binding_shape(0)[0]
-
- def init_data(dataloader, task):
- self.is_coco = self.data.get("is_coco", False)
- self.ids = self.coco80_to_coco91_class() if self.is_coco else list(range(1000))
- pad = 0.0
- dataloader = create_dataloader(self.data[task if task in ('train', 'val', 'test') else 'val'],
- self.img_size, self.batch_size, self.stride, check_labels=True, pad=pad, rect=False,
- data_dict=self.data, task=task)[0]
- return dataloader
-
- def convert_to_coco_format_trt(nums, boxes, scores, classes, paths, shapes, ids):
- pred_results = []
- for i, (num, detbox, detscore, detcls) in enumerate(zip(nums, boxes, scores, classes)):
- n = int(num[0])
- if n == 0:
- continue
- path, shape = Path(paths[i]), shapes[i][0]
- gain = shapes[i][1][0][0]
- pad = torch.tensor(shapes[i][1][1]*2).to(self.device)
- detbox = detbox[:n, :]
- detbox -= pad
- detbox /= gain
- detbox[:, 0].clamp_(0, shape[1])
- detbox[:, 1].clamp_(0, shape[0])
- detbox[:, 2].clamp_(0, shape[1])
- detbox[:, 3].clamp_(0, shape[0])
- detbox[:,2:] = detbox[:,2:] - detbox[:,:2]
- detscore = detscore[:n]
- detcls = detcls[:n]
-
- image_id = int(path.stem) if path.stem.isnumeric() else path.stem
-
- for ind in range(n):
- category_id = ids[int(detcls[ind])]
- bbox = [round(x, 3) for x in detbox[ind].tolist()]
- score = round(detscore[ind].item(), 5)
- pred_data = {
- "image_id": image_id,
- "category_id": category_id,
- "bbox": bbox,
- "score": score
- }
- pred_results.append(pred_data)
- return pred_results
-
- context, bindings, binding_addrs, trt_batch_size = init_engine(engine)
-        assert trt_batch_size >= self.batch_size, f'The batch size you set is {self.batch_size}, but it must be <= the TensorRT binding batch size {trt_batch_size}.'
- tmp = torch.randn(self.batch_size, 3, self.img_size, self.img_size).to(self.device)
- # warm up for 10 times
- for _ in range(10):
- binding_addrs['images'] = int(tmp.data_ptr())
- context.execute_v2(list(binding_addrs.values()))
- dataloader = init_data(None,'val')
- self.speed_result = torch.zeros(4, device=self.device)
- pred_results = []
-        pbar = tqdm(dataloader, desc="Running inference on the validation dataset.", ncols=NCOLS)
- for imgs, targets, paths, shapes in pbar:
- nb_img = imgs.shape[0]
- if nb_img != self.batch_size:
-                # pad to the batch size the TensorRT engine was built with
- zeros = torch.zeros(self.batch_size - nb_img, 3, *imgs.shape[2:])
- imgs = torch.cat([imgs, zeros],0)
- t1 = time_sync()
- imgs = imgs.to(self.device, non_blocking=True)
- # preprocess
- imgs = imgs.float()
- imgs /= 255
-
- self.speed_result[1] += time_sync() - t1 # pre-process time
-
- # inference
- t2 = time_sync()
- binding_addrs['images'] = int(imgs.data_ptr())
- context.execute_v2(list(binding_addrs.values()))
-        # in the last batch, nb_img may be less than the batch size, so fetch only the valid detections with [:nb_img]
- nums = bindings['num_dets'].data[:nb_img]
- boxes = bindings['det_boxes'].data[:nb_img]
- scores = bindings['det_scores'].data[:nb_img]
- classes = bindings['det_classes'].data[:nb_img]
- self.speed_result[2] += time_sync() - t2 # inference time
-
- self.speed_result[3] += 0
- pred_results.extend(convert_to_coco_format_trt(nums, boxes, scores, classes, paths, shapes, self.ids))
- self.speed_result[0] += self.batch_size
- return dataloader, pred_results
diff --git a/cv/detection/yolov6/pytorch/yolov6/core/inferer.py b/cv/detection/yolov6/pytorch/yolov6/core/inferer.py
deleted file mode 100644
index cea6586de6374c5158e9a2313d8d7e27e38c15f8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/core/inferer.py
+++ /dev/null
@@ -1,295 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import cv2
-import time
-import math
-import torch
-import numpy as np
-import os.path as osp
-
-from tqdm import tqdm
-from pathlib import Path
-from PIL import ImageFont
-from collections import deque
-
-from yolov6.utils.events import LOGGER, load_yaml
-from yolov6.layers.common import DetectBackend
-from yolov6.data.data_augment import letterbox
-from yolov6.data.datasets import LoadData
-from yolov6.utils.nms import non_max_suppression
-from yolov6.utils.torch_utils import get_model_info
-
-class Inferer:
- def __init__(self, source, webcam, webcam_addr, weights, device, yaml, img_size, half):
-
- self.__dict__.update(locals())
-
- # Init model
- self.device = device
- self.img_size = img_size
- cuda = self.device != 'cpu' and torch.cuda.is_available()
- self.device = torch.device(f'cuda:{device}' if cuda else 'cpu')
- self.model = DetectBackend(weights, device=self.device)
- self.stride = self.model.stride
- self.class_names = load_yaml(yaml)['names']
- self.img_size = self.check_img_size(self.img_size, s=self.stride) # check image size
- self.half = half
-
- # Switch model to deploy status
- self.model_switch(self.model.model, self.img_size)
-
- # Half precision
- if self.half & (self.device.type != 'cpu'):
- self.model.model.half()
- else:
- self.model.model.float()
- self.half = False
-
- if self.device.type != 'cpu':
- self.model(torch.zeros(1, 3, *self.img_size).to(self.device).type_as(next(self.model.model.parameters()))) # warmup
-
- # Load data
- self.webcam = webcam
- self.webcam_addr = webcam_addr
- self.files = LoadData(source, webcam, webcam_addr)
- self.source = source
-
-
- def model_switch(self, model, img_size):
- ''' Model switch to deploy status '''
- from yolov6.layers.common import RepVGGBlock
- for layer in model.modules():
- if isinstance(layer, RepVGGBlock):
- layer.switch_to_deploy()
- elif isinstance(layer, torch.nn.Upsample) and not hasattr(layer, 'recompute_scale_factor'):
- layer.recompute_scale_factor = None # torch 1.11.0 compatibility
-
- LOGGER.info("Switch model to deploy modality.")
-
- def infer(self, conf_thres, iou_thres, classes, agnostic_nms, max_det, save_dir, save_txt, save_img, hide_labels, hide_conf, view_img=True):
- ''' Model Inference and results visualization '''
- vid_path, vid_writer, windows = None, None, []
- fps_calculator = CalcFPS()
- for img_src, img_path, vid_cap in tqdm(self.files):
- img, img_src = self.process_image(img_src, self.img_size, self.stride, self.half)
- img = img.to(self.device)
- if len(img.shape) == 3:
- img = img[None]
- # expand for batch dim
- t1 = time.time()
- pred_results = self.model(img)
- det = non_max_suppression(pred_results, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)[0]
- t2 = time.time()
-
- if self.webcam:
- save_path = osp.join(save_dir, self.webcam_addr)
- txt_path = osp.join(save_dir, self.webcam_addr)
- else:
-                # Create output files in nested dirs that mirror the structure of the images' dirs
- rel_path = osp.relpath(osp.dirname(img_path), osp.dirname(self.source))
- save_path = osp.join(save_dir, rel_path, osp.basename(img_path)) # im.jpg
- txt_path = osp.join(save_dir, rel_path, 'labels', osp.splitext(osp.basename(img_path))[0])
- os.makedirs(osp.join(save_dir, rel_path), exist_ok=True)
-
- gn = torch.tensor(img_src.shape)[[1, 0, 1, 0]] # normalization gain whwh
- img_ori = img_src.copy()
-
- # check image and font
-            assert img_ori.data.contiguous, 'Image needs to be contiguous. Please apply np.ascontiguousarray(im) to the input image.'
- self.font_check()
-
- if len(det):
- det[:, :4] = self.rescale(img.shape[2:], det[:, :4], img_src.shape).round()
- for *xyxy, conf, cls in reversed(det):
- if save_txt: # Write to file
- xywh = (self.box_convert(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf)
- with open(txt_path + '.txt', 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- if save_img:
- class_num = int(cls) # integer class
- label = None if hide_labels else (self.class_names[class_num] if hide_conf else f'{self.class_names[class_num]} {conf:.2f}')
-
- self.plot_box_and_label(img_ori, max(round(sum(img_ori.shape) / 2 * 0.003), 2), xyxy, label, color=self.generate_colors(class_num, True))
-
- img_src = np.asarray(img_ori)
-
- # FPS counter
- fps_calculator.update(1.0 / (t2 - t1))
- avg_fps = fps_calculator.accumulate()
-
- if self.files.type == 'video':
- self.draw_text(
- img_src,
- f"FPS: {avg_fps:0.1f}",
- pos=(20, 20),
- font_scale=1.0,
- text_color=(204, 85, 17),
- text_color_bg=(255, 255, 255),
- font_thickness=2,
- )
-
- if view_img:
- if img_path not in windows:
- windows.append(img_path)
- cv2.namedWindow(str(img_path), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux)
- cv2.resizeWindow(str(img_path), img_src.shape[1], img_src.shape[0])
- cv2.imshow(str(img_path), img_src)
- cv2.waitKey(1) # 1 millisecond
-
- # Save results (image with detections)
- if save_img:
- if self.files.type == 'image':
- cv2.imwrite(save_path, img_src)
- else: # 'video' or 'stream'
- if vid_path != save_path: # new video
- vid_path = save_path
- if isinstance(vid_writer, cv2.VideoWriter):
- vid_writer.release() # release previous video writer
- if vid_cap: # video
- fps = vid_cap.get(cv2.CAP_PROP_FPS)
- w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- else: # stream
- fps, w, h = 30, img_ori.shape[1], img_ori.shape[0]
- save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos
- vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
- vid_writer.write(img_src)
-
- @staticmethod
- def process_image(img_src, img_size, stride, half):
- '''Process image before image inference.'''
- image = letterbox(img_src, img_size, stride=stride)[0]
- # Convert
- image = image.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB
- image = torch.from_numpy(np.ascontiguousarray(image))
- image = image.half() if half else image.float() # uint8 to fp16/32
- image /= 255 # 0 - 255 to 0.0 - 1.0
-
- return image, img_src
-
- @staticmethod
- def rescale(ori_shape, boxes, target_shape):
- '''Rescale the output to the original image shape'''
- ratio = min(ori_shape[0] / target_shape[0], ori_shape[1] / target_shape[1])
- padding = (ori_shape[1] - target_shape[1] * ratio) / 2, (ori_shape[0] - target_shape[0] * ratio) / 2
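-        # Invert the letterbox transform: subtract the symmetric padding
-        # first, then undo the uniform scale ratio.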
-
- boxes[:, [0, 2]] -= padding[0]
- boxes[:, [1, 3]] -= padding[1]
- boxes[:, :4] /= ratio
-
- boxes[:, 0].clamp_(0, target_shape[1]) # x1
- boxes[:, 1].clamp_(0, target_shape[0]) # y1
- boxes[:, 2].clamp_(0, target_shape[1]) # x2
- boxes[:, 3].clamp_(0, target_shape[0]) # y2
-
- return boxes
-
- def check_img_size(self, img_size, s=32, floor=0):
- """Make sure image size is a multiple of stride s in each dimension, and return a new shape list of image."""
- if isinstance(img_size, int): # integer i.e. img_size=640
- new_size = max(self.make_divisible(img_size, int(s)), floor)
- elif isinstance(img_size, list): # list i.e. img_size=[640, 480]
- new_size = [max(self.make_divisible(x, int(s)), floor) for x in img_size]
- else:
- raise Exception(f"Unsupported type of img_size: {type(img_size)}")
-
- if new_size != img_size:
-            print(f'WARNING: --img-size {img_size} must be a multiple of max stride {s}, updating to {new_size}')
- return new_size if isinstance(img_size,list) else [new_size]*2
-
- def make_divisible(self, x, divisor):
-        # Round x up to the nearest multiple of divisor.
- return math.ceil(x / divisor) * divisor
-
- @staticmethod
- def draw_text(
- img,
- text,
- font=cv2.FONT_HERSHEY_SIMPLEX,
- pos=(0, 0),
- font_scale=1,
- font_thickness=2,
- text_color=(0, 255, 0),
- text_color_bg=(0, 0, 0),
- ):
-
- offset = (5, 5)
- x, y = pos
- text_size, _ = cv2.getTextSize(text, font, font_scale, font_thickness)
- text_w, text_h = text_size
- rec_start = tuple(x - y for x, y in zip(pos, offset))
- rec_end = tuple(x + y for x, y in zip((x + text_w, y + text_h), offset))
- cv2.rectangle(img, rec_start, rec_end, text_color_bg, -1)
- cv2.putText(
- img,
- text,
- (x, int(y + text_h + font_scale - 1)),
- font,
- font_scale,
- text_color,
- font_thickness,
- cv2.LINE_AA,
- )
-
- return text_size
-
- @staticmethod
- def plot_box_and_label(image, lw, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255), font=cv2.FONT_HERSHEY_COMPLEX):
- # Add one xyxy box to image with label
- p1, p2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3]))
- cv2.rectangle(image, p1, p2, color, thickness=lw, lineType=cv2.LINE_AA)
- if label:
- tf = max(lw - 1, 1) # font thickness
- w, h = cv2.getTextSize(label, 0, fontScale=lw / 3, thickness=tf)[0] # text width, height
- outside = p1[1] - h - 3 >= 0 # label fits outside box
- p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3
- cv2.rectangle(image, p1, p2, color, -1, cv2.LINE_AA) # filled
- cv2.putText(image, label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), font, lw / 3, txt_color,
- thickness=tf, lineType=cv2.LINE_AA)
-
- @staticmethod
-    def font_check(font='./yolov6/utils/Arial.ttf', size=10):
-        # Return a PIL TrueType font loaded from the given path
-        assert osp.exists(font), f'font path not exists: {font}'
-        try:
-            return ImageFont.truetype(str(font), size)
-        except Exception: # fall back to a system font lookup by name
-            return ImageFont.truetype(Path(font).name, size)
-
- @staticmethod
- def box_convert(x):
- # Convert boxes with shape [n, 4] from [x1, y1, x2, y2] to [x, y, w, h] where x1y1=top-left, x2y2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
- @staticmethod
- def generate_colors(i, bgr=False):
- hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB',
- '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7')
- palette = []
-        for hex_color in hex:
-            h = '#' + hex_color
-            palette.append(tuple(int(h[1 + j:1 + j + 2], 16) for j in (0, 2, 4)))
- num = len(palette)
- color = palette[int(i) % num]
- return (color[2], color[1], color[0]) if bgr else color
-
-class CalcFPS:
- def __init__(self, nsamples: int = 50):
- self.framerate = deque(maxlen=nsamples)
-
-    def update(self, fps: float):
-        self.framerate.append(fps)
-
- def accumulate(self):
- if len(self.framerate) > 1:
- return np.average(self.framerate)
- else:
- return 0.0
diff --git a/cv/detection/yolov6/pytorch/yolov6/data/data_augment.py b/cv/detection/yolov6/pytorch/yolov6/data/data_augment.py
deleted file mode 100644
index 45df88e6487a696b95770a24685a70340e6f413a..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/data/data_augment.py
+++ /dev/null
@@ -1,208 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# This code is based on
-# https://github.com/ultralytics/yolov5/blob/master/utils/dataloaders.py
-
-import math
-import random
-
-import cv2
-import numpy as np
-
-
-def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
- '''HSV color-space augmentation.'''
- if hgain or sgain or vgain:
- r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
- hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))
- dtype = im.dtype # uint8
-
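-        # Apply the gains via 256-entry lookup tables (one cv2.LUT per
-        # channel); hue wraps modulo 180 because OpenCV stores 8-bit hue
-        # in [0, 180).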
- x = np.arange(0, 256, dtype=r.dtype)
- lut_hue = ((x * r[0]) % 180).astype(dtype)
- lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
- lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
-
- im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
- cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed
-
-
-def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleup=True, stride=32):
- '''Resize and pad image while meeting stride-multiple constraints.'''
- shape = im.shape[:2] # current shape [height, width]
- if isinstance(new_shape, int):
- new_shape = (new_shape, new_shape)
- elif isinstance(new_shape, list) and len(new_shape) == 1:
- new_shape = (new_shape[0], new_shape[0])
-
- # Scale ratio (new / old)
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
- if not scaleup: # only scale down, do not scale up (for better val mAP)
- r = min(r, 1.0)
-
- # Compute padding
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
- dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
-
- if auto: # minimum rectangle
- dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
-
- dw /= 2 # divide padding into 2 sides
- dh /= 2
-
- if shape[::-1] != new_unpad: # resize
- im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
- im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
-
- return im, r, (left, top)
-
-
-def mixup(im, labels, im2, labels2):
- '''Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf.'''
- r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0
- im = (im * r + im2 * (1 - r)).astype(np.uint8)
- labels = np.concatenate((labels, labels2), 0)
- return im, labels
-
-
-def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n)
- '''Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio.'''
- w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
- w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
- ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
- return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates
-
-
-def random_affine(img, labels=(), degrees=10, translate=.1, scale=.1, shear=10,
- new_shape=(640, 640)):
- '''Applies Random affine transformation.'''
- n = len(labels)
- if isinstance(new_shape, int):
- height = width = new_shape
- else:
- height, width = new_shape
-
- M, s = get_transform_matrix(img.shape[:2], (height, width), degrees, scale, shear, translate)
- if (M != np.eye(3)).any(): # image changed
- img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
-
- # Transform label coordinates
- if n:
- new = np.zeros((n, 4))
-
- xy = np.ones((n * 4, 3))
- xy[:, :2] = labels[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
- xy = xy @ M.T # transform
- xy = xy[:, :2].reshape(n, 8) # perspective rescale or affine
-
- # create new boxes
- x = xy[:, [0, 2, 4, 6]]
- y = xy[:, [1, 3, 5, 7]]
- new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
-
- # clip
- new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
- new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
-
- # filter candidates
- i = box_candidates(box1=labels[:, 1:5].T * s, box2=new.T, area_thr=0.1)
- labels = labels[i]
- labels[:, 1:5] = new[i]
-
- return img, labels
-
-
-def get_transform_matrix(img_shape, new_shape, degrees, scale, shear, translate):
- new_height, new_width = new_shape
- # Center
- C = np.eye(3)
- C[0, 2] = -img_shape[1] / 2 # x translation (pixels)
- C[1, 2] = -img_shape[0] / 2 # y translation (pixels)
-
- # Rotation and Scale
- R = np.eye(3)
- a = random.uniform(-degrees, degrees)
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
- s = random.uniform(1 - scale, 1 + scale)
- # s = 2 ** random.uniform(-scale, scale)
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
-
- # Shear
- S = np.eye(3)
- S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
- S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
-
- # Translation
- T = np.eye(3)
- T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * new_width # x translation (pixels)
-    T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * new_height # y translation (pixels)
-
- # Combined rotation matrix
- M = T @ S @ R @ C # order of operations (right to left) is IMPORTANT
- return M, s
-
-
-def mosaic_augmentation(shape, imgs, hs, ws, labels, hyp, specific_shape = False, target_height=640, target_width=640):
- '''Applies Mosaic augmentation.'''
- assert len(imgs) == 4, "Mosaic augmentation of current version only supports 4 images."
- labels4 = []
- if not specific_shape:
- if isinstance(shape, list) or isinstance(shape, np.ndarray):
- target_height, target_width = shape
- else:
- target_height = target_width = shape
-
- yc, xc = (int(random.uniform(x//2, 3*x//2)) for x in (target_height, target_width) ) # mosaic center x, y
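-    # The mosaic is assembled on a 2x-size canvas; the shared corner of the
-    # four tiles is sampled from the central half of that canvas.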
-
- for i in range(len(imgs)):
- # Load image
- img, h, w = imgs[i], hs[i], ws[i]
- # place img in img4
- if i == 0: # top left
- img4 = np.full((target_height * 2, target_width * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
-
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
- elif i == 1: # top right
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, target_width * 2), yc
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
- elif i == 2: # bottom left
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(target_height * 2, yc + h)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
- elif i == 3: # bottom right
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, target_width * 2), min(target_height * 2, yc + h)
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
-
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- padw = x1a - x1b
- padh = y1a - y1b
-
- # Labels
- labels_per_img = labels[i].copy()
- if labels_per_img.size:
- boxes = np.copy(labels_per_img[:, 1:])
- boxes[:, 0] = w * (labels_per_img[:, 1] - labels_per_img[:, 3] / 2) + padw # top left x
- boxes[:, 1] = h * (labels_per_img[:, 2] - labels_per_img[:, 4] / 2) + padh # top left y
- boxes[:, 2] = w * (labels_per_img[:, 1] + labels_per_img[:, 3] / 2) + padw # bottom right x
- boxes[:, 3] = h * (labels_per_img[:, 2] + labels_per_img[:, 4] / 2) + padh # bottom right y
- labels_per_img[:, 1:] = boxes
-
- labels4.append(labels_per_img)
-
- # Concat/clip labels
- labels4 = np.concatenate(labels4, 0)
- # for x in (labels4[:, 1:]):
- # np.clip(x, 0, 2 * s, out=x)
- labels4[:, 1::2] = np.clip(labels4[:, 1::2], 0, 2 * target_width)
- labels4[:, 2::2] = np.clip(labels4[:, 2::2], 0, 2 * target_height)
-
- # Augment
- img4, labels4 = random_affine(img4, labels4,
- degrees=hyp['degrees'],
- translate=hyp['translate'],
- scale=hyp['scale'],
- shear=hyp['shear'],
- new_shape=(target_height, target_width))
-
- return img4, labels4
diff --git a/cv/detection/yolov6/pytorch/yolov6/data/data_load.py b/cv/detection/yolov6/pytorch/yolov6/data/data_load.py
deleted file mode 100644
index e68e8d710a64ff803037fef341149b997b9ad047..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/data/data_load.py
+++ /dev/null
@@ -1,126 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# This code is based on
-# https://github.com/ultralytics/yolov5/blob/master/utils/dataloaders.py
-
-import os
-import torch.distributed as dist
-from torch.utils.data import dataloader, distributed
-
-from .datasets import TrainValDataset
-from yolov6.utils.events import LOGGER
-from yolov6.utils.torch_utils import torch_distributed_zero_first
-
-
-def create_dataloader(
- path,
- img_size,
- batch_size,
- stride,
- hyp=None,
- augment=False,
- check_images=False,
- check_labels=False,
- pad=0.0,
- rect=False,
- rank=-1,
- workers=8,
- shuffle=False,
- data_dict=None,
- task="Train",
- specific_shape=False,
- height=1088,
- width=1920
-
-):
- """Create general dataloader.
-
- Returns dataloader and dataset
- """
- if rect and shuffle:
- LOGGER.warning(
- "WARNING: --rect is incompatible with DataLoader shuffle, setting shuffle=False"
- )
- shuffle = False
- with torch_distributed_zero_first(rank):
- dataset = TrainValDataset(
- path,
- img_size,
- batch_size,
- augment=augment,
- hyp=hyp,
- rect=rect,
- check_images=check_images,
- check_labels=check_labels,
- stride=int(stride),
- pad=pad,
- rank=rank,
- data_dict=data_dict,
- task=task,
-            specific_shape=specific_shape,
- height=height,
- width=width
- )
-
- batch_size = min(batch_size, len(dataset))
- workers = min(
- [
- os.cpu_count() // int(os.getenv("WORLD_SIZE", 1)),
- batch_size if batch_size > 1 else 0,
- workers,
- ]
- ) # number of workers
-    # In DDP mode, if the GPU count is greater than 1 and rect=True, DistributedSampler
-    # samples from the start again when the last samples cannot be assigned equally to
-    # each GPU process. This can cause shape differences within one batch, such as
-    # (384,640,3) vs. (416,640,3), which raises an exception in torch.stack inside the collate function.
- drop_last = rect and dist.is_initialized() and dist.get_world_size() > 1
- sampler = (
- None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle, drop_last=drop_last)
- )
- return (
- TrainValDataLoader(
- dataset,
- batch_size=batch_size,
- shuffle=shuffle and sampler is None,
- num_workers=workers,
- sampler=sampler,
- pin_memory=True,
- collate_fn=TrainValDataset.collate_fn,
- ),
- dataset,
- )
-
-
-class TrainValDataLoader(dataloader.DataLoader):
- """Dataloader that reuses workers
-
- Uses same syntax as vanilla DataLoader
- """
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- object.__setattr__(self, "batch_sampler", _RepeatSampler(self.batch_sampler))
- self.iterator = super().__iter__()
-
- def __len__(self):
- return len(self.batch_sampler.sampler)
-
- def __iter__(self):
- for i in range(len(self)):
- yield next(self.iterator)
-
-
-class _RepeatSampler:
- """Sampler that repeats forever
-
- Args:
- sampler (Sampler)
- """
-
- def __init__(self, sampler):
- self.sampler = sampler
-
- def __iter__(self):
- while True:
- yield from iter(self.sampler)
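
The `TrainValDataLoader`/`_RepeatSampler` pair removed above implements the common worker-reuse trick: the batch sampler is wrapped in a sampler that never terminates, so the underlying iterator and its worker processes are created once and reused across epochs, while `__iter__` caps each epoch at the true number of batches. A self-contained sketch of the same pattern (class names are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

class _Repeat:
    """Wraps a batch sampler so that it yields forever."""
    def __init__(self, sampler):
        self.sampler = sampler

    def __iter__(self):
        while True:
            yield from iter(self.sampler)

class ReusableLoader(DataLoader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # DataLoader forbids reassigning batch_sampler after init, hence object.__setattr__
        object.__setattr__(self, "batch_sampler", _Repeat(self.batch_sampler))
        self.iterator = super().__iter__()  # created once, reused every epoch

    def __len__(self):
        return len(self.batch_sampler.sampler)  # true number of batches per epoch

    def __iter__(self):
        for _ in range(len(self)):
            yield next(self.iterator)

loader = ReusableLoader(TensorDataset(torch.arange(8)), batch_size=4)
for epoch in range(2):  # both epochs draw from the same persistent iterator
    for (batch,) in loader:
        print(epoch, batch.tolist())
```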
diff --git a/cv/detection/yolov6/pytorch/yolov6/data/datasets.py b/cv/detection/yolov6/pytorch/yolov6/data/datasets.py
deleted file mode 100644
index a5b8bc05e0b9a3e41c21fd6fc665310609411f88..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/data/datasets.py
+++ /dev/null
@@ -1,664 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-
-import glob
-from io import UnsupportedOperation
-import os
-import os.path as osp
-import random
-import json
-import time
-import hashlib
-from pathlib import Path
-
-from multiprocessing.pool import Pool
-
-import cv2
-import numpy as np
-from tqdm import tqdm
-from PIL import ExifTags, Image, ImageOps
-
-import torch
-from torch.utils.data import Dataset
-import torch.distributed as dist
-
-from .data_augment import (
- augment_hsv,
- letterbox,
- mixup,
- random_affine,
- mosaic_augmentation,
-)
-from yolov6.utils.events import LOGGER
-
-
-# Parameters
-IMG_FORMATS = ["bmp", "jpg", "jpeg", "png", "tif", "tiff", "dng", "webp", "mpo"]
-VID_FORMATS = ["mp4", "mov", "avi", "mkv"]
-IMG_FORMATS.extend([f.upper() for f in IMG_FORMATS])
-VID_FORMATS.extend([f.upper() for f in VID_FORMATS])
-# Get orientation exif tag
-for k, v in ExifTags.TAGS.items():
- if v == "Orientation":
- ORIENTATION = k
- break
-
-def img2label_paths(img_paths):
- # Define label paths as a function of image paths
- sa, sb = f'{os.sep}images{os.sep}', f'{os.sep}labels{os.sep}' # /images/, /labels/ substrings
- return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths]
-
-class TrainValDataset(Dataset):
- '''YOLOv6 train_loader/val_loader, loads images and labels for training and validation.'''
- def __init__(
- self,
- img_dir,
- img_size=640,
- batch_size=16,
- augment=False,
- hyp=None,
- rect=False,
- check_images=False,
- check_labels=False,
- stride=32,
- pad=0.0,
- rank=-1,
- data_dict=None,
- task="train",
-        specific_shape=False,
- height=1088,
- width=1920
-
- ):
-        assert task.lower() in ("train", "val", "test", "speed"), f"Unsupported task: {task}"
- t1 = time.time()
- self.__dict__.update(locals())
- self.main_process = self.rank in (-1, 0)
- self.task = self.task.capitalize()
- self.class_names = data_dict["names"]
- self.img_paths, self.labels = self.get_imgs_labels(self.img_dir)
- self.rect = rect
- self.specific_shape = specific_shape
- self.target_height = height
- self.target_width = width
- if self.rect:
- shapes = [self.img_info[p]["shape"] for p in self.img_paths]
- self.shapes = np.array(shapes, dtype=np.float64)
- if dist.is_initialized():
-                # In DDP mode, we need to make sure all images within batch_size * gpu_num
-                # will be resized and padded to the same shape.
- sample_batch_size = self.batch_size * dist.get_world_size()
- else:
- sample_batch_size = self.batch_size
- self.batch_indices = np.floor(
- np.arange(len(shapes)) / sample_batch_size
- ).astype(
- np.int_
- ) # batch indices of each image
-
- self.sort_files_shapes()
-
- t2 = time.time()
- if self.main_process:
- LOGGER.info(f"%.1fs for dataset initialization." % (t2 - t1))
-
- def __len__(self):
- """Get the length of dataset"""
- return len(self.img_paths)
-
- def __getitem__(self, index):
- """Fetching a data sample for a given key.
- This function applies mosaic and mixup augments during training.
- During validation, letterbox augment is applied.
- """
- target_shape = (
- (self.target_height, self.target_width) if self.specific_shape else
- self.batch_shapes[self.batch_indices[index]] if self.rect
- else self.img_size
- )
-
- # Mosaic Augmentation
- if self.augment and random.random() < self.hyp["mosaic"]:
- img, labels = self.get_mosaic(index, target_shape)
- shapes = None
-
- # MixUp augmentation
- if random.random() < self.hyp["mixup"]:
- img_other, labels_other = self.get_mosaic(
- random.randint(0, len(self.img_paths) - 1), target_shape
- )
- img, labels = mixup(img, labels, img_other, labels_other)
-
- else:
- # Load image
- if self.hyp and "shrink_size" in self.hyp:
- img, (h0, w0), (h, w) = self.load_image(index, self.hyp["shrink_size"])
- else:
- img, (h0, w0), (h, w) = self.load_image(index)
-
- # letterbox
- img, ratio, pad = letterbox(img, target_shape, auto=False, scaleup=self.augment)
- shapes = (h0, w0), ((h * ratio / h0, w * ratio / w0), pad) # for COCO mAP rescaling
-
- labels = self.labels[index].copy()
- if labels.size:
- w *= ratio
- h *= ratio
- # new boxes
- boxes = np.copy(labels[:, 1:])
- boxes[:, 0] = (
- w * (labels[:, 1] - labels[:, 3] / 2) + pad[0]
- ) # top left x
- boxes[:, 1] = (
- h * (labels[:, 2] - labels[:, 4] / 2) + pad[1]
- ) # top left y
- boxes[:, 2] = (
- w * (labels[:, 1] + labels[:, 3] / 2) + pad[0]
- ) # bottom right x
- boxes[:, 3] = (
- h * (labels[:, 2] + labels[:, 4] / 2) + pad[1]
- ) # bottom right y
- labels[:, 1:] = boxes
-
- if self.augment:
- img, labels = random_affine(
- img,
- labels,
- degrees=self.hyp["degrees"],
- translate=self.hyp["translate"],
- scale=self.hyp["scale"],
- shear=self.hyp["shear"],
- new_shape=target_shape,
- )
-
- if len(labels):
- h, w = img.shape[:2]
-
- labels[:, [1, 3]] = labels[:, [1, 3]].clip(0, w - 1e-3) # x1, x2
- labels[:, [2, 4]] = labels[:, [2, 4]].clip(0, h - 1e-3) # y1, y2
-
- boxes = np.copy(labels[:, 1:])
- boxes[:, 0] = ((labels[:, 1] + labels[:, 3]) / 2) / w # x center
- boxes[:, 1] = ((labels[:, 2] + labels[:, 4]) / 2) / h # y center
- boxes[:, 2] = (labels[:, 3] - labels[:, 1]) / w # width
- boxes[:, 3] = (labels[:, 4] - labels[:, 2]) / h # height
- labels[:, 1:] = boxes
-
- if self.augment:
- img, labels = self.general_augment(img, labels)
-
- labels_out = torch.zeros((len(labels), 6))
- if len(labels):
- labels_out[:, 1:] = torch.from_numpy(labels)
-
- # Convert
- img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB
- img = np.ascontiguousarray(img)
-
- return torch.from_numpy(img), labels_out, self.img_paths[index], shapes
-
- def load_image(self, index, shrink_size=None):
- """Load image.
- This function loads image by cv2, resize original image to target shape(img_size) with keeping ratio.
-
- Returns:
- Image, original shape of image, resized image shape
- """
- path = self.img_paths[index]
- try:
- im = cv2.imread(path)
-            assert im is not None, f"opencv cannot read image correctly or {path} does not exist"
-        except Exception:
- im = cv2.cvtColor(np.asarray(Image.open(path)), cv2.COLOR_RGB2BGR)
- assert im is not None, f"Image Not Found {path}, workdir: {os.getcwd()}"
-
- h0, w0 = im.shape[:2] # origin shape
- if self.specific_shape:
- # keep ratio resize
- ratio = min(self.target_width / w0, self.target_height / h0)
-
- elif shrink_size:
- ratio = (self.img_size - shrink_size) / max(h0, w0)
-
- else:
- ratio = self.img_size / max(h0, w0)
-
- if ratio != 1:
- im = cv2.resize(
- im,
- (int(w0 * ratio), int(h0 * ratio)),
- interpolation=cv2.INTER_AREA
- if ratio < 1 and not self.augment
- else cv2.INTER_LINEAR,
- )
- return im, (h0, w0), im.shape[:2]
-
- @staticmethod
- def collate_fn(batch):
- """Merges a list of samples to form a mini-batch of Tensor(s)"""
- img, label, path, shapes = zip(*batch)
- for i, l in enumerate(label):
- l[:, 0] = i # add target image index for build_targets()
- return torch.stack(img, 0), torch.cat(label, 0), path, shapes
-
- def get_imgs_labels(self, img_dirs):
- if not isinstance(img_dirs, list):
- img_dirs = [img_dirs]
- # we store the cache img file in the first directory of img_dirs
- valid_img_record = osp.join(
- osp.dirname(img_dirs[0]), "." + osp.basename(img_dirs[0]) + "_cache.json"
- )
- NUM_THREADS = min(8, os.cpu_count())
- img_paths = []
- for img_dir in img_dirs:
- assert osp.exists(img_dir), f"{img_dir} is an invalid directory path!"
- img_paths += glob.glob(osp.join(img_dir, "**/*"), recursive=True)
-
- img_paths = sorted(
- p for p in img_paths if p.split(".")[-1].lower() in IMG_FORMATS and os.path.isfile(p)
- )
-
- assert img_paths, f"No images found in {img_dir}."
- img_hash = self.get_hash(img_paths)
-        LOGGER.info(f'Image record information path: {valid_img_record}')
- if osp.exists(valid_img_record):
- with open(valid_img_record, "r") as f:
- cache_info = json.load(f)
- if "image_hash" in cache_info and cache_info["image_hash"] == img_hash:
- img_info = cache_info["information"]
- else:
- self.check_images = True
- else:
- self.check_images = True
-
- # check images
- if self.check_images and self.main_process:
- img_info = {}
- nc, msgs = 0, [] # number corrupt, messages
- LOGGER.info(
- f"{self.task}: Checking formats of images with {NUM_THREADS} process(es): "
- )
- with Pool(NUM_THREADS) as pool:
- pbar = tqdm(
- pool.imap(TrainValDataset.check_image, img_paths),
- total=len(img_paths),
- )
- for img_path, shape_per_img, nc_per_img, msg in pbar:
- if nc_per_img == 0: # not corrupted
- img_info[img_path] = {"shape": shape_per_img}
- nc += nc_per_img
- if msg:
- msgs.append(msg)
- pbar.desc = f"{nc} image(s) corrupted"
- pbar.close()
- if msgs:
- LOGGER.info("\n".join(msgs))
-
- cache_info = {"information": img_info, "image_hash": img_hash}
- # save valid image paths.
- with open(valid_img_record, "w") as f:
- json.dump(cache_info, f)
-
- # check and load anns
-
- img_paths = list(img_info.keys())
- label_paths = img2label_paths(img_paths)
- assert label_paths, f"No labels found."
- label_hash = self.get_hash(label_paths)
- if "label_hash" not in cache_info or cache_info["label_hash"] != label_hash:
- self.check_labels = True
-
- if self.check_labels:
- cache_info["label_hash"] = label_hash
-            nm, nf, ne, nc, msgs = 0, 0, 0, 0, []  # number missing, found, empty, corrupt, messages
- LOGGER.info(
- f"{self.task}: Checking formats of labels with {NUM_THREADS} process(es): "
- )
- with Pool(NUM_THREADS) as pool:
- pbar = pool.imap(
- TrainValDataset.check_label_files, zip(img_paths, label_paths)
- )
- pbar = tqdm(pbar, total=len(label_paths)) if self.main_process else pbar
- for (
- img_path,
- labels_per_file,
- nc_per_file,
- nm_per_file,
- nf_per_file,
- ne_per_file,
- msg,
- ) in pbar:
- if nc_per_file == 0:
- img_info[img_path]["labels"] = labels_per_file
- else:
- img_info.pop(img_path)
- nc += nc_per_file
- nm += nm_per_file
- nf += nf_per_file
- ne += ne_per_file
- if msg:
- msgs.append(msg)
- if self.main_process:
- pbar.desc = f"{nf} label(s) found, {nm} label(s) missing, {ne} label(s) empty, {nc} invalid label files"
- if self.main_process:
- pbar.close()
- with open(valid_img_record, "w") as f:
- json.dump(cache_info, f)
- if msgs:
- LOGGER.info("\n".join(msgs))
- if nf == 0:
- LOGGER.warning(
- f"WARNING: No labels found in {osp.dirname(img_paths[0])}. "
- )
-
- if self.task.lower() == "val":
- if self.data_dict.get("is_coco", False): # use original json file when evaluating on coco dataset.
-                assert osp.exists(self.data_dict["anno_path"]), "Evaluating on the COCO dataset requires a valid annotation file path in the config file: data/coco.yaml"
- else:
- assert (
- self.class_names
- ), "Class names is required when converting labels to coco format for evaluating."
- save_dir = osp.join(osp.dirname(osp.dirname(img_dirs[0])), "annotations")
- if not osp.exists(save_dir):
- os.mkdir(save_dir)
- save_path = osp.join(
- save_dir, "instances_" + osp.basename(img_dirs[0]) + ".json"
- )
- TrainValDataset.generate_coco_format_labels(
- img_info, self.class_names, save_path
- )
-
- img_paths, labels = list(
- zip(
- *[
- (
- img_path,
- np.array(info["labels"], dtype=np.float32)
- if info["labels"]
- else np.zeros((0, 5), dtype=np.float32),
- )
- for img_path, info in img_info.items()
- ]
- )
- )
- self.img_info = img_info
- LOGGER.info(
- f"{self.task}: Final numbers of valid images: {len(img_paths)}/ labels: {len(labels)}. "
- )
- return img_paths, labels
-
- def get_mosaic(self, index, shape):
- """Gets images and labels after mosaic augments"""
- indices = [index] + random.choices(
- range(0, len(self.img_paths)), k=3
- ) # 3 additional image indices
- random.shuffle(indices)
- imgs, hs, ws, labels = [], [], [], []
- for index in indices:
- img, _, (h, w) = self.load_image(index)
- labels_per_img = self.labels[index]
- imgs.append(img)
- hs.append(h)
- ws.append(w)
- labels.append(labels_per_img)
- img, labels = mosaic_augmentation(shape, imgs, hs, ws, labels, self.hyp, self.specific_shape, self.target_height, self.target_width)
- return img, labels
-
- def general_augment(self, img, labels):
- """Gets images and labels after general augment
- This function applies hsv, random ud-flip and random lr-flips augments.
- """
- nl = len(labels)
-
- # HSV color-space
- augment_hsv(
- img,
- hgain=self.hyp["hsv_h"],
- sgain=self.hyp["hsv_s"],
- vgain=self.hyp["hsv_v"],
- )
-
- # Flip up-down
- if random.random() < self.hyp["flipud"]:
- img = np.flipud(img)
- if nl:
- labels[:, 2] = 1 - labels[:, 2]
-
- # Flip left-right
- if random.random() < self.hyp["fliplr"]:
- img = np.fliplr(img)
- if nl:
- labels[:, 1] = 1 - labels[:, 1]
-
- return img, labels
-
- def sort_files_shapes(self):
- '''Sort by aspect ratio.'''
- batch_num = self.batch_indices[-1] + 1
- s = self.shapes # [height, width]
- ar = s[:, 1] / s[:, 0] # aspect ratio
- irect = ar.argsort()
- self.img_paths = [self.img_paths[i] for i in irect]
- self.labels = [self.labels[i] for i in irect]
- self.shapes = s[irect] # wh
- ar = ar[irect]
-
- # Set training image shapes
- shapes = [[1, 1]] * batch_num
- for i in range(batch_num):
- ari = ar[self.batch_indices == i]
- mini, maxi = ari.min(), ari.max()
- if maxi < 1:
- shapes[i] = [1, maxi]
- elif mini > 1:
- shapes[i] = [1 / mini, 1]
- self.batch_shapes = (
- np.ceil(np.array(shapes) * self.img_size / self.stride + self.pad).astype(
- np.int_
- )
- * self.stride
- )
-
- @staticmethod
- def check_image(im_file):
- '''Verify an image.'''
- nc, msg = 0, ""
- try:
- im = Image.open(im_file)
- im.verify() # PIL verify
- im = Image.open(im_file) # need to reload the image after using verify()
- shape = (im.height, im.width) # (height, width)
- try:
- im_exif = im._getexif()
- if im_exif and ORIENTATION in im_exif:
- rotation = im_exif[ORIENTATION]
- if rotation in (6, 8):
- shape = (shape[1], shape[0])
-            except Exception:
- im_exif = None
-
- assert (shape[0] > 9) & (shape[1] > 9), f"image size {shape} <10 pixels"
- assert im.format.lower() in IMG_FORMATS, f"invalid image format {im.format}"
- if im.format.lower() in ("jpg", "jpeg"):
- with open(im_file, "rb") as f:
- f.seek(-2, 2)
- if f.read() != b"\xff\xd9": # corrupt JPEG
- ImageOps.exif_transpose(Image.open(im_file)).save(
- im_file, "JPEG", subsampling=0, quality=100
- )
- msg += f"WARNING: {im_file}: corrupt JPEG restored and saved"
- return im_file, shape, nc, msg
- except Exception as e:
- nc = 1
- msg = f"WARNING: {im_file}: ignoring corrupt image: {e}"
- return im_file, None, nc, msg
-
- @staticmethod
- def check_label_files(args):
- img_path, lb_path = args
-        nm, nf, ne, nc, msg = 0, 0, 0, 0, ""  # number (missing, found, empty, corrupt), message
- try:
- if osp.exists(lb_path):
- nf = 1 # label found
- with open(lb_path, "r") as f:
- labels = [
- x.split() for x in f.read().strip().splitlines() if len(x)
- ]
- labels = np.array(labels, dtype=np.float32)
- if len(labels):
- assert all(
- len(l) == 5 for l in labels
- ), f"{lb_path}: wrong label format."
- assert (
- labels >= 0
- ).all(), f"{lb_path}: Label values error: all values in label file must > 0"
- assert (
- labels[:, 1:] <= 1
- ).all(), f"{lb_path}: Label values error: all coordinates must be normalized"
-
- _, indices = np.unique(labels, axis=0, return_index=True)
- if len(indices) < len(labels): # duplicate row check
-                    msg += f"WARNING: {lb_path}: {len(labels) - len(indices)} duplicate labels removed"
-                    labels = labels[indices]  # remove duplicates
- labels = labels.tolist()
- else:
- ne = 1 # label empty
- labels = []
- else:
- nm = 1 # label missing
- labels = []
-
- return img_path, labels, nc, nm, nf, ne, msg
- except Exception as e:
- nc = 1
- msg = f"WARNING: {lb_path}: ignoring invalid labels: {e}"
- return img_path, None, nc, nm, nf, ne, msg
-
- @staticmethod
- def generate_coco_format_labels(img_info, class_names, save_path):
- # for evaluation with pycocotools
- dataset = {"categories": [], "annotations": [], "images": []}
- for i, class_name in enumerate(class_names):
- dataset["categories"].append(
- {"id": i, "name": class_name, "supercategory": ""}
- )
-
- ann_id = 0
- LOGGER.info(f"Convert to COCO format")
- for i, (img_path, info) in enumerate(tqdm(img_info.items())):
- labels = info["labels"] if info["labels"] else []
- img_id = osp.splitext(osp.basename(img_path))[0]
- img_h, img_w = info["shape"]
- dataset["images"].append(
- {
- "file_name": os.path.basename(img_path),
- "id": img_id,
- "width": img_w,
- "height": img_h,
- }
- )
- if labels:
- for label in labels:
- c, x, y, w, h = label[:5]
- # convert x,y,w,h to x1,y1,x2,y2
- x1 = (x - w / 2) * img_w
- y1 = (y - h / 2) * img_h
- x2 = (x + w / 2) * img_w
- y2 = (y + h / 2) * img_h
- # cls_id starts from 0
- cls_id = int(c)
- w = max(0, x2 - x1)
- h = max(0, y2 - y1)
- dataset["annotations"].append(
- {
- "area": h * w,
- "bbox": [x1, y1, w, h],
- "category_id": cls_id,
- "id": ann_id,
- "image_id": img_id,
- "iscrowd": 0,
- # mask
- "segmentation": [],
- }
- )
- ann_id += 1
-
- with open(save_path, "w") as f:
- json.dump(dataset, f)
- LOGGER.info(
- f"Convert to COCO format finished. Resutls saved in {save_path}"
- )
-
- @staticmethod
- def get_hash(paths):
- """Get the hash value of paths"""
-        assert isinstance(paths, list), "Only list inputs are supported currently."
- h = hashlib.md5("".join(paths).encode())
- return h.hexdigest()
-
-
-class LoadData:
- def __init__(self, path, webcam, webcam_addr):
- self.webcam = webcam
- self.webcam_addr = webcam_addr
- if webcam: # if use web camera
- imgp = []
- vidp = [int(webcam_addr) if webcam_addr.isdigit() else webcam_addr]
- else:
- p = str(Path(path).resolve()) # os-agnostic absolute path
- if os.path.isdir(p):
- files = sorted(glob.glob(os.path.join(p, '**/*.*'), recursive=True)) # dir
- elif os.path.isfile(p):
- files = [p] # files
- else:
- raise FileNotFoundError(f'Invalid path {p}')
- imgp = [i for i in files if i.split('.')[-1] in IMG_FORMATS]
- vidp = [v for v in files if v.split('.')[-1] in VID_FORMATS]
- self.files = imgp + vidp
- self.nf = len(self.files)
- self.type = 'image'
- if len(vidp) > 0:
- self.add_video(vidp[0]) # new video
- else:
- self.cap = None
-
- def checkext(self, path):
- if self.webcam:
- file_type = 'video'
- else:
- file_type = 'image' if path.split('.')[-1].lower() in IMG_FORMATS else 'video'
- return file_type
-
- def __iter__(self):
- self.count = 0
- return self
-
- def __next__(self):
- if self.count == self.nf:
- raise StopIteration
- path = self.files[self.count]
- if self.checkext(path) == 'video':
- self.type = 'video'
- ret_val, img = self.cap.read()
- while not ret_val:
- self.count += 1
- self.cap.release()
- if self.count == self.nf: # last video
- raise StopIteration
- path = self.files[self.count]
- self.add_video(path)
- ret_val, img = self.cap.read()
- else:
- # Read image
- self.count += 1
- img = cv2.imread(path) # BGR
- return img, path, self.cap
-
- def add_video(self, path):
- self.frame = 0
- self.cap = cv2.VideoCapture(path)
- self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))
-
- def __len__(self):
- return self.nf # number of files
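
`sort_files_shapes` in the dataset removed above is the heart of rectangular training: images are sorted by aspect ratio, and each batch receives one letterbox shape rounded up to a multiple of the stride, so all images within a batch pad to the same size. A minimal sketch of that rule (function name is illustrative):

```python
import numpy as np

def rect_batch_shapes(hw, batch_size, img_size=640, stride=32, pad=0.0):
    """hw: list of (height, width); returns one (h, w) per batch and the sort order."""
    hw = np.asarray(hw, dtype=np.float64)
    ar = hw[:, 1] / hw[:, 0]                 # aspect ratio = width / height
    order = ar.argsort()                     # group similar aspect ratios together
    ar = ar[order]
    idx = np.arange(len(hw)) // batch_size   # batch index of each (sorted) image
    shapes = np.ones((idx[-1] + 1, 2))
    for i in range(idx[-1] + 1):
        ari = ar[idx == i]
        mini, maxi = ari.min(), ari.max()
        if maxi < 1:                         # batch of tall images: fix height
            shapes[i] = [1, maxi]
        elif mini > 1:                       # batch of wide images: fix width
            shapes[i] = [1 / mini, 1]
    return np.ceil(shapes * img_size / stride + pad).astype(int) * stride, order

shapes, order = rect_batch_shapes([(480, 640), (640, 480), (720, 1280)], batch_size=1)
print(shapes)  # [[640 480], [480 640], [384 640]] -- each side a multiple of 32
```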
diff --git a/cv/detection/yolov6/pytorch/yolov6/data/vis_dataset.py b/cv/detection/yolov6/pytorch/yolov6/data/vis_dataset.py
deleted file mode 100644
index 09716ae54e9f9de66642a5fe4634e243f898ac9a..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/data/vis_dataset.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# coding=utf-8
-# Description: visualize yolo label image.
-
-import argparse
-import os
-import cv2
-import numpy as np
-
-IMG_FORMATS = ["bmp", "jpg", "jpeg", "png", "tif", "tiff", "dng", "webp", "mpo"]
-IMG_FORMATS.extend([f.upper() for f in IMG_FORMATS])
-
-
-def main(args):
- img_dir, label_dir, class_names = args.img_dir, args.label_dir, args.class_names
-
- label_map = dict()
- for class_id, classname in enumerate(class_names):
- label_map[class_id] = classname
-
- for file in os.listdir(img_dir):
- if file.split('.')[-1] not in IMG_FORMATS:
- print(f'[Warning]: Non-image file {file}')
- continue
- img_path = os.path.join(img_dir, file)
- label_path = os.path.join(label_dir, file[: file.rindex('.')] + '.txt')
-
- try:
- img_data = cv2.imread(img_path)
- height, width, _ = img_data.shape
- color = [tuple(np.random.choice(range(256), size=3)) for i in class_names]
- thickness = 2
-
- with open(label_path, 'r') as f:
- for bbox in f:
- cls, x_c, y_c, w, h = [float(v) if i > 0 else int(v) for i, v in enumerate(bbox.split('\n')[0].split(' '))]
-
- x_tl = int((x_c - w / 2) * width)
- y_tl = int((y_c - h / 2) * height)
- cv2.rectangle(img_data, (x_tl, y_tl), (x_tl + int(w * width), y_tl + int(h * height)), tuple([int(x) for x in color[cls]]), thickness)
- cv2.putText(img_data, label_map[cls], (x_tl, y_tl - 10), cv2.FONT_HERSHEY_COMPLEX, 1, tuple([int(x) for x in color[cls]]), thickness)
-
- cv2.imshow('image', img_data)
- cv2.waitKey(0)
- except Exception as e:
- print(f'[Error]: {e} {img_path}')
- print('======All Done!======')
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--img_dir', default='VOCdevkit/voc_07_12/images')
- parser.add_argument('--label_dir', default='VOCdevkit/voc_07_12/labels')
- parser.add_argument('--class_names', default=['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
- 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'])
-
- args = parser.parse_args()
- print(args)
-
- main(args)
diff --git a/cv/detection/yolov6/pytorch/yolov6/data/voc2yolo.py b/cv/detection/yolov6/pytorch/yolov6/data/voc2yolo.py
deleted file mode 100644
index 9019e1fcd23b66bc6afab9bb52a60349c79d71c8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/data/voc2yolo.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import xml.etree.ElementTree as ET
-from tqdm import tqdm
-import os
-import shutil
-import argparse
-
-# VOC dataset (refer https://github.com/ultralytics/yolov5/blob/master/data/VOC.yaml)
-# VOC2007 trainval: 446MB, 5012 images
-# VOC2007 test: 438MB, 4953 images
-# VOC2012 trainval: 1.95GB, 17126 images
-
-VOC_NAMES = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
- 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
-
-
-def convert_label(path, lb_path, year, image_id):
- def convert_box(size, box):
- dw, dh = 1. / size[0], 1. / size[1]
- x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
- return x * dw, y * dh, w * dw, h * dh
- in_file = open(os.path.join(path, f'VOC{year}/Annotations/{image_id}.xml'))
- out_file = open(lb_path, 'w')
- tree = ET.parse(in_file)
- root = tree.getroot()
- size = root.find('size')
- w = int(size.find('width').text)
- h = int(size.find('height').text)
- for obj in root.iter('object'):
- cls = obj.find('name').text
-        if cls in VOC_NAMES and int(obj.find('difficult').text) != 1:
- xmlbox = obj.find('bndbox')
- bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')])
- cls_id = VOC_NAMES.index(cls) # class id
- out_file.write(" ".join([str(a) for a in (cls_id, *bb)]) + '\n')
-
-
-def gen_voc07_12(voc_path):
- '''
- Generate voc07+12 setting dataset:
- train: # train images 16551 images
- - images/train2012
- - images/train2007
- - images/val2012
- - images/val2007
- val: # val images (relative to 'path') 4952 images
- - images/test2007
- '''
- dataset_root = os.path.join(voc_path, 'voc_07_12')
- if not os.path.exists(dataset_root):
- os.makedirs(dataset_root)
-
- dataset_settings = {'train': ['train2007', 'val2007', 'train2012', 'val2012'], 'val':['test2007']}
- for item in ['images', 'labels']:
- for data_type, data_list in dataset_settings.items():
- for data_name in data_list:
- ori_path = os.path.join(voc_path, item, data_name)
- new_path = os.path.join(dataset_root, item, data_type)
- if not os.path.exists(new_path):
- os.makedirs(new_path)
-
- print(f'[INFO]: Copying {ori_path} to {new_path}')
- for file in os.listdir(ori_path):
- shutil.copy(os.path.join(ori_path, file), new_path)
-
-
-def main(args):
- voc_path = args.voc_path
- for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'):
- imgs_path = os.path.join(voc_path, 'images', f'{image_set}')
- lbs_path = os.path.join(voc_path, 'labels', f'{image_set}')
-
- try:
- with open(os.path.join(voc_path, f'VOC{year}/ImageSets/Main/{image_set}.txt'), 'r') as f:
- image_ids = f.read().strip().split()
- if not os.path.exists(imgs_path):
- os.makedirs(imgs_path)
- if not os.path.exists(lbs_path):
- os.makedirs(lbs_path)
-
- for id in tqdm(image_ids, desc=f'{image_set}{year}'):
- f = os.path.join(voc_path, f'VOC{year}/JPEGImages/{id}.jpg') # old img path
- lb_path = os.path.join(lbs_path, f'{id}.txt') # new label path
- convert_label(voc_path, lb_path, year, id) # convert labels to YOLO format
- if os.path.exists(f):
- shutil.move(f, imgs_path) # move image
- except Exception as e:
-            print(f'[Warning]: {e} {year}{image_set} conversion failed!')
-
- gen_voc07_12(voc_path)
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--voc_path', default='VOCdevkit')
-
- args = parser.parse_args()
- print(args)
-
- main(args)
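
For completeness, `convert_box` in the removed `voc2yolo.py` maps a VOC corner box `(xmin, xmax, ymin, ymax)` in pixels to YOLO's normalized `(cx, cy, w, h)`; the `-1` compensates for VOC's 1-based pixel coordinates. A worked example (the function body is copied from the file above; the numbers are illustrative):

```python
def convert_box(size, box):
    # size: (width, height); box: (xmin, xmax, ymin, ymax) in VOC pixel coordinates
    dw, dh = 1. / size[0], 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1          # box center, shifted to 0-based pixels
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh

# A (100, 200)-(300, 300) pixel box in a 500x400 image:
print(convert_box((500, 400), (100, 300, 200, 300)))
# -> (0.398, 0.6225, 0.4, 0.25), i.e. normalized (cx, cy, w, h)
```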
diff --git a/cv/detection/yolov6/pytorch/yolov6/layers/common.py b/cv/detection/yolov6/pytorch/yolov6/layers/common.py
deleted file mode 100644
index c69d9d04a0662a396c7fa48acede586a2cfac270..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/layers/common.py
+++ /dev/null
@@ -1,986 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import warnings
-import numpy as np
-from pathlib import Path
-import torch
-import torch.nn as nn
-import torch.nn.init as init
-from torch.nn.parameter import Parameter
-from yolov6.utils.general import download_ckpt
-
-
-activation_table = {'relu': nn.ReLU(),
-                    'silu': nn.SiLU(),
-                    'hardswish': nn.Hardswish()
-                    }
-
-class SiLU(nn.Module):
- '''Activation of SiLU'''
- @staticmethod
- def forward(x):
- return x * torch.sigmoid(x)
-
-
-class ConvModule(nn.Module):
- '''A combination of Conv + BN + Activation'''
- def __init__(self, in_channels, out_channels, kernel_size, stride, activation_type, padding=None, groups=1, bias=False):
- super().__init__()
- if padding is None:
- padding = kernel_size // 2
- self.conv = nn.Conv2d(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding,
- groups=groups,
- bias=bias,
- )
- self.bn = nn.BatchNorm2d(out_channels)
- if activation_type is not None:
- self.act = activation_table.get(activation_type)
- self.activation_type = activation_type
-
- def forward(self, x):
- if self.activation_type is None:
- return self.bn(self.conv(x))
- return self.act(self.bn(self.conv(x)))
-
- def forward_fuse(self, x):
- if self.activation_type is None:
- return self.conv(x)
- return self.act(self.conv(x))
-
-
-class ConvBNReLU(nn.Module):
- '''Conv and BN with ReLU activation'''
- def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=None, groups=1, bias=False):
- super().__init__()
- self.block = ConvModule(in_channels, out_channels, kernel_size, stride, 'relu', padding, groups, bias)
-
- def forward(self, x):
- return self.block(x)
-
-
-class ConvBNSiLU(nn.Module):
- '''Conv and BN with SiLU activation'''
- def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=None, groups=1, bias=False):
- super().__init__()
- self.block = ConvModule(in_channels, out_channels, kernel_size, stride, 'silu', padding, groups, bias)
-
- def forward(self, x):
- return self.block(x)
-
-
-class ConvBN(nn.Module):
- '''Conv and BN without activation'''
- def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=None, groups=1, bias=False):
- super().__init__()
- self.block = ConvModule(in_channels, out_channels, kernel_size, stride, None, padding, groups, bias)
-
- def forward(self, x):
- return self.block(x)
-
-
-class ConvBNHS(nn.Module):
- '''Conv and BN with Hardswish activation'''
- def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=None, groups=1, bias=False):
- super().__init__()
- self.block = ConvModule(in_channels, out_channels, kernel_size, stride, 'hardswish', padding, groups, bias)
-
- def forward(self, x):
- return self.block(x)
-
-
-class SPPFModule(nn.Module):
-
- def __init__(self, in_channels, out_channels, kernel_size=5, block=ConvBNReLU):
- super().__init__()
- c_ = in_channels // 2 # hidden channels
- self.cv1 = block(in_channels, c_, 1, 1)
- self.cv2 = block(c_ * 4, out_channels, 1, 1)
- self.m = nn.MaxPool2d(kernel_size=kernel_size, stride=1, padding=kernel_size // 2)
-
- def forward(self, x):
- x = self.cv1(x)
- with warnings.catch_warnings():
- warnings.simplefilter('ignore')
- y1 = self.m(x)
- y2 = self.m(y1)
- return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
-
-
-class SimSPPF(nn.Module):
- '''Simplified SPPF with ReLU activation'''
- def __init__(self, in_channels, out_channels, kernel_size=5, block=ConvBNReLU):
- super().__init__()
- self.sppf = SPPFModule(in_channels, out_channels, kernel_size, block)
-
- def forward(self, x):
- return self.sppf(x)
-
-
-class SPPF(nn.Module):
- '''SPPF with SiLU activation'''
- def __init__(self, in_channels, out_channels, kernel_size=5, block=ConvBNSiLU):
- super().__init__()
- self.sppf = SPPFModule(in_channels, out_channels, kernel_size, block)
-
- def forward(self, x):
- return self.sppf(x)
-
-
-class CSPSPPFModule(nn.Module):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, in_channels, out_channels, kernel_size=5, e=0.5, block=ConvBNReLU):
- super().__init__()
- c_ = int(out_channels * e) # hidden channels
- self.cv1 = block(in_channels, c_, 1, 1)
- self.cv2 = block(in_channels, c_, 1, 1)
- self.cv3 = block(c_, c_, 3, 1)
- self.cv4 = block(c_, c_, 1, 1)
-
- self.m = nn.MaxPool2d(kernel_size=kernel_size, stride=1, padding=kernel_size // 2)
- self.cv5 = block(4 * c_, c_, 1, 1)
- self.cv6 = block(c_, c_, 3, 1)
- self.cv7 = block(2 * c_, out_channels, 1, 1)
-
- def forward(self, x):
- x1 = self.cv4(self.cv3(self.cv1(x)))
- y0 = self.cv2(x)
- with warnings.catch_warnings():
- warnings.simplefilter('ignore')
- y1 = self.m(x1)
- y2 = self.m(y1)
- y3 = self.cv6(self.cv5(torch.cat([x1, y1, y2, self.m(y2)], 1)))
- return self.cv7(torch.cat((y0, y3), dim=1))
-
-
-class SimCSPSPPF(nn.Module):
- '''CSPSPPF with ReLU activation'''
- def __init__(self, in_channels, out_channels, kernel_size=5, e=0.5, block=ConvBNReLU):
- super().__init__()
- self.cspsppf = CSPSPPFModule(in_channels, out_channels, kernel_size, e, block)
-
- def forward(self, x):
- return self.cspsppf(x)
-
-
-class CSPSPPF(nn.Module):
- '''CSPSPPF with SiLU activation'''
- def __init__(self, in_channels, out_channels, kernel_size=5, e=0.5, block=ConvBNSiLU):
- super().__init__()
- self.cspsppf = CSPSPPFModule(in_channels, out_channels, kernel_size, e, block)
-
- def forward(self, x):
- return self.cspsppf(x)
-
-
-class Transpose(nn.Module):
- '''Normal Transpose, default for upsampling'''
- def __init__(self, in_channels, out_channels, kernel_size=2, stride=2):
- super().__init__()
- self.upsample_transpose = torch.nn.ConvTranspose2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=kernel_size,
- stride=stride,
- bias=True
- )
-
- def forward(self, x):
- return self.upsample_transpose(x)
-
-
-class RepVGGBlock(nn.Module):
- '''RepVGGBlock is a basic rep-style block, including training and deploy status
- This code is based on https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py
- '''
- def __init__(self, in_channels, out_channels, kernel_size=3,
- stride=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
- super(RepVGGBlock, self).__init__()
- """ Initialization of the class.
- Args:
- in_channels (int): Number of channels in the input image
- out_channels (int): Number of channels produced by the convolution
- kernel_size (int or tuple): Size of the convolving kernel
- stride (int or tuple, optional): Stride of the convolution. Default: 1
- padding (int or tuple, optional): Zero-padding added to both sides of
- the input. Default: 1
- dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
- groups (int, optional): Number of blocked connections from input
- channels to output channels. Default: 1
- padding_mode (string, optional): Default: 'zeros'
-        deploy: Whether the block is in deploy status or training status. Default: False
- use_se: Whether to use se. Default: False
- """
- self.deploy = deploy
- self.groups = groups
- self.in_channels = in_channels
- self.out_channels = out_channels
-
- assert kernel_size == 3
- assert padding == 1
-
- padding_11 = padding - kernel_size // 2
-
- self.nonlinearity = nn.ReLU()
-
- if use_se:
- raise NotImplementedError("se block not supported yet")
- else:
- self.se = nn.Identity()
-
- if deploy:
- self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride,
- padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode)
-
- else:
- self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) if out_channels == in_channels and stride == 1 else None
- self.rbr_dense = ConvModule(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, activation_type=None, padding=padding, groups=groups)
- self.rbr_1x1 = ConvModule(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, activation_type=None, padding=padding_11, groups=groups)
-
- def forward(self, inputs):
- '''Forward process'''
- if hasattr(self, 'rbr_reparam'):
- return self.nonlinearity(self.se(self.rbr_reparam(inputs)))
-
- if self.rbr_identity is None:
- id_out = 0
- else:
- id_out = self.rbr_identity(inputs)
-
- return self.nonlinearity(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out))
-
- def get_equivalent_kernel_bias(self):
- kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
- kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
- kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
- return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
-
- def _avg_to_3x3_tensor(self, avgp):
- channels = self.in_channels
- groups = self.groups
- kernel_size = avgp.kernel_size
- input_dim = channels // groups
- k = torch.zeros((channels, input_dim, kernel_size, kernel_size))
- k[np.arange(channels), np.tile(np.arange(input_dim), groups), :, :] = 1.0 / kernel_size ** 2
- return k
-
- def _pad_1x1_to_3x3_tensor(self, kernel1x1):
- if kernel1x1 is None:
- return 0
- else:
- return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])
-
- def _fuse_bn_tensor(self, branch):
- if branch is None:
- return 0, 0
- if isinstance(branch, ConvModule):
- kernel = branch.conv.weight
- bias = branch.conv.bias
- return kernel, bias
- elif isinstance(branch, nn.BatchNorm2d):
- if not hasattr(self, 'id_tensor'):
- input_dim = self.in_channels // self.groups
- kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
- for i in range(self.in_channels):
- kernel_value[i, i % input_dim, 1, 1] = 1
- self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
- kernel = self.id_tensor
- running_mean = branch.running_mean
- running_var = branch.running_var
- gamma = branch.weight
- beta = branch.bias
- eps = branch.eps
- std = (running_var + eps).sqrt()
- t = (gamma / std).reshape(-1, 1, 1, 1)
- return kernel * t, beta - running_mean * gamma / std
-
- def switch_to_deploy(self):
- if hasattr(self, 'rbr_reparam'):
- return
- kernel, bias = self.get_equivalent_kernel_bias()
- self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels, out_channels=self.rbr_dense.conv.out_channels,
- kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
- padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation, groups=self.rbr_dense.conv.groups, bias=True)
- self.rbr_reparam.weight.data = kernel
- self.rbr_reparam.bias.data = bias
- for para in self.parameters():
- para.detach_()
- self.__delattr__('rbr_dense')
- self.__delattr__('rbr_1x1')
- if hasattr(self, 'rbr_identity'):
- self.__delattr__('rbr_identity')
- if hasattr(self, 'id_tensor'):
- self.__delattr__('id_tensor')
- self.deploy = True
-
-
-class QARepVGGBlock(RepVGGBlock):
- """
-    QARepVGGBlock is a basic quantization-aware rep-style block, including training and deploy status
- This code is based on https://arxiv.org/abs/2212.01593
- """
- def __init__(self, in_channels, out_channels, kernel_size=3,
- stride=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
- super(QARepVGGBlock, self).__init__(in_channels, out_channels, kernel_size, stride, padding, dilation, groups,
- padding_mode, deploy, use_se)
- if not deploy:
- self.bn = nn.BatchNorm2d(out_channels)
- self.rbr_1x1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, groups=groups, bias=False)
- self.rbr_identity = nn.Identity() if out_channels == in_channels and stride == 1 else None
- self._id_tensor = None
-
- def forward(self, inputs):
- if hasattr(self, 'rbr_reparam'):
- return self.nonlinearity(self.bn(self.se(self.rbr_reparam(inputs))))
-
- if self.rbr_identity is None:
- id_out = 0
- else:
- id_out = self.rbr_identity(inputs)
-
- return self.nonlinearity(self.bn(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)))
-
- def get_equivalent_kernel_bias(self):
- kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
- kernel = kernel3x3 + self._pad_1x1_to_3x3_tensor(self.rbr_1x1.weight)
- bias = bias3x3
-
- if self.rbr_identity is not None:
- input_dim = self.in_channels // self.groups
- kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
- for i in range(self.in_channels):
- kernel_value[i, i % input_dim, 1, 1] = 1
- id_tensor = torch.from_numpy(kernel_value).to(self.rbr_1x1.weight.device)
- kernel = kernel + id_tensor
- return kernel, bias
-
- def _fuse_extra_bn_tensor(self, kernel, bias, branch):
- assert isinstance(branch, nn.BatchNorm2d)
- running_mean = branch.running_mean - bias # remove bias
- running_var = branch.running_var
- gamma = branch.weight
- beta = branch.bias
- eps = branch.eps
- std = (running_var + eps).sqrt()
- t = (gamma / std).reshape(-1, 1, 1, 1)
- return kernel * t, beta - running_mean * gamma / std
-
- def switch_to_deploy(self):
- if hasattr(self, 'rbr_reparam'):
- return
- kernel, bias = self.get_equivalent_kernel_bias()
- self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels, out_channels=self.rbr_dense.conv.out_channels,
- kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
- padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation, groups=self.rbr_dense.conv.groups, bias=True)
- self.rbr_reparam.weight.data = kernel
- self.rbr_reparam.bias.data = bias
- for para in self.parameters():
- para.detach_()
- self.__delattr__('rbr_dense')
- self.__delattr__('rbr_1x1')
- if hasattr(self, 'rbr_identity'):
- self.__delattr__('rbr_identity')
- if hasattr(self, 'id_tensor'):
- self.__delattr__('id_tensor')
- # keep post bn for QAT
- # if hasattr(self, 'bn'):
- # self.__delattr__('bn')
- self.deploy = True
-
-
-class QARepVGGBlockV2(RepVGGBlock):
- """
-    QARepVGGBlockV2 is a basic quantization-aware rep-style block, including training and deploy status
- This code is based on https://arxiv.org/abs/2212.01593
- """
- def __init__(self, in_channels, out_channels, kernel_size=3,
- stride=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
- super(QARepVGGBlockV2, self).__init__(in_channels, out_channels, kernel_size, stride, padding, dilation, groups,
- padding_mode, deploy, use_se)
- if not deploy:
- self.bn = nn.BatchNorm2d(out_channels)
- self.rbr_1x1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, groups=groups, bias=False)
- self.rbr_identity = nn.Identity() if out_channels == in_channels and stride == 1 else None
- self.rbr_avg = nn.AvgPool2d(kernel_size=kernel_size, stride=stride, padding=padding) if out_channels == in_channels and stride == 1 else None
- self._id_tensor = None
-
- def forward(self, inputs):
- if hasattr(self, 'rbr_reparam'):
- return self.nonlinearity(self.bn(self.se(self.rbr_reparam(inputs))))
-
- if self.rbr_identity is None:
- id_out = 0
- else:
- id_out = self.rbr_identity(inputs)
- if self.rbr_avg is None:
- avg_out = 0
- else:
- avg_out = self.rbr_avg(inputs)
-
- return self.nonlinearity(self.bn(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out + avg_out)))
-
- def get_equivalent_kernel_bias(self):
- kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
- kernel = kernel3x3 + self._pad_1x1_to_3x3_tensor(self.rbr_1x1.weight)
- if self.rbr_avg is not None:
- kernelavg = self._avg_to_3x3_tensor(self.rbr_avg)
- kernel = kernel + kernelavg.to(self.rbr_1x1.weight.device)
- bias = bias3x3
-
- if self.rbr_identity is not None:
- input_dim = self.in_channels // self.groups
- kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
- for i in range(self.in_channels):
- kernel_value[i, i % input_dim, 1, 1] = 1
- id_tensor = torch.from_numpy(kernel_value).to(self.rbr_1x1.weight.device)
- kernel = kernel + id_tensor
- return kernel, bias
-
- def _fuse_extra_bn_tensor(self, kernel, bias, branch):
- assert isinstance(branch, nn.BatchNorm2d)
- running_mean = branch.running_mean - bias # remove bias
- running_var = branch.running_var
- gamma = branch.weight
- beta = branch.bias
- eps = branch.eps
- std = (running_var + eps).sqrt()
- t = (gamma / std).reshape(-1, 1, 1, 1)
- return kernel * t, beta - running_mean * gamma / std
-
- def switch_to_deploy(self):
- if hasattr(self, 'rbr_reparam'):
- return
- kernel, bias = self.get_equivalent_kernel_bias()
- self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels, out_channels=self.rbr_dense.conv.out_channels,
- kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
- padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation, groups=self.rbr_dense.conv.groups, bias=True)
- self.rbr_reparam.weight.data = kernel
- self.rbr_reparam.bias.data = bias
- for para in self.parameters():
- para.detach_()
- self.__delattr__('rbr_dense')
- self.__delattr__('rbr_1x1')
- if hasattr(self, 'rbr_identity'):
- self.__delattr__('rbr_identity')
- if hasattr(self, 'rbr_avg'):
- self.__delattr__('rbr_avg')
- if hasattr(self, 'id_tensor'):
- self.__delattr__('id_tensor')
- # keep post bn for QAT
- # if hasattr(self, 'bn'):
- # self.__delattr__('bn')
- self.deploy = True
-
-
-class RealVGGBlock(nn.Module):
-
- def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1,
- dilation=1, groups=1, padding_mode='zeros', use_se=False,
- ):
- super(RealVGGBlock, self).__init__()
- self.relu = nn.ReLU()
- self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
- self.bn = nn.BatchNorm2d(out_channels)
-
- if use_se:
- raise NotImplementedError("se block not supported yet")
- else:
- self.se = nn.Identity()
-
- def forward(self, inputs):
- out = self.relu(self.se(self.bn(self.conv(inputs))))
- return out
-
-
-class ScaleLayer(torch.nn.Module):
-
- def __init__(self, num_features, use_bias=True, scale_init=1.0):
- super(ScaleLayer, self).__init__()
- self.weight = Parameter(torch.Tensor(num_features))
- init.constant_(self.weight, scale_init)
- self.num_features = num_features
- if use_bias:
- self.bias = Parameter(torch.Tensor(num_features))
- init.zeros_(self.bias)
- else:
- self.bias = None
-
- def forward(self, inputs):
- if self.bias is None:
- return inputs * self.weight.view(1, self.num_features, 1, 1)
- else:
- return inputs * self.weight.view(1, self.num_features, 1, 1) + self.bias.view(1, self.num_features, 1, 1)
-
-
-# A CSLA block is a LinearAddBlock with is_csla=True
-class LinearAddBlock(nn.Module):
-
- def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1,
- dilation=1, groups=1, padding_mode='zeros', use_se=False, is_csla=False, conv_scale_init=1.0):
- super(LinearAddBlock, self).__init__()
- self.in_channels = in_channels
- self.relu = nn.ReLU()
- self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
- self.scale_conv = ScaleLayer(num_features=out_channels, use_bias=False, scale_init=conv_scale_init)
- self.conv_1x1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=0, bias=False)
- self.scale_1x1 = ScaleLayer(num_features=out_channels, use_bias=False, scale_init=conv_scale_init)
- if in_channels == out_channels and stride == 1:
- self.scale_identity = ScaleLayer(num_features=out_channels, use_bias=False, scale_init=1.0)
- self.bn = nn.BatchNorm2d(out_channels)
- if is_csla: # Make them constant
- self.scale_1x1.requires_grad_(False)
- self.scale_conv.requires_grad_(False)
- if use_se:
- raise NotImplementedError("se block not supported yet")
- else:
- self.se = nn.Identity()
-
- def forward(self, inputs):
- out = self.scale_conv(self.conv(inputs)) + self.scale_1x1(self.conv_1x1(inputs))
- if hasattr(self, 'scale_identity'):
- out += self.scale_identity(inputs)
- out = self.relu(self.se(self.bn(out)))
- return out
-
-
-class DetectBackend(nn.Module):
- def __init__(self, weights='yolov6s.pt', device=None, dnn=True):
- super().__init__()
- if not os.path.exists(weights):
- download_ckpt(weights) # try to download model from github automatically.
- assert isinstance(weights, str) and Path(weights).suffix == '.pt', f'{Path(weights).suffix} format is not supported.'
- from yolov6.utils.checkpoint import load_checkpoint
- model = load_checkpoint(weights, map_location=device)
- stride = int(model.stride.max())
- self.__dict__.update(locals()) # assign all variables to self
-
- def forward(self, im, val=False):
- y, _ = self.model(im)
- if isinstance(y, np.ndarray):
- y = torch.tensor(y, device=self.device)
- return y
-
-
-class RepBlock(nn.Module):
- '''
- RepBlock is a stage block with rep-style basic block
- '''
- def __init__(self, in_channels, out_channels, n=1, block=RepVGGBlock, basic_block=RepVGGBlock):
- super().__init__()
-
- self.conv1 = block(in_channels, out_channels)
- self.block = nn.Sequential(*(block(out_channels, out_channels) for _ in range(n - 1))) if n > 1 else None
- if block == BottleRep:
- self.conv1 = BottleRep(in_channels, out_channels, basic_block=basic_block, weight=True)
- n = n // 2
- self.block = nn.Sequential(*(BottleRep(out_channels, out_channels, basic_block=basic_block, weight=True) for _ in range(n - 1))) if n > 1 else None
-
- def forward(self, x):
- x = self.conv1(x)
- if self.block is not None:
- x = self.block(x)
- return x
-
-
-class BottleRep(nn.Module):
-
- def __init__(self, in_channels, out_channels, basic_block=RepVGGBlock, weight=False):
- super().__init__()
- self.conv1 = basic_block(in_channels, out_channels)
- self.conv2 = basic_block(out_channels, out_channels)
- if in_channels != out_channels:
- self.shortcut = False
- else:
- self.shortcut = True
- if weight:
- self.alpha = Parameter(torch.ones(1))
- else:
- self.alpha = 1.0
-
- def forward(self, x):
- outputs = self.conv1(x)
- outputs = self.conv2(outputs)
- return outputs + self.alpha * x if self.shortcut else outputs
-
-
-class BottleRep3(nn.Module):
-
- def __init__(self, in_channels, out_channels, basic_block=RepVGGBlock, weight=False):
- super().__init__()
- self.conv1 = basic_block(in_channels, out_channels)
- self.conv2 = basic_block(out_channels, out_channels)
- self.conv3 = basic_block(out_channels, out_channels)
- if in_channels != out_channels:
- self.shortcut = False
- else:
- self.shortcut = True
- if weight:
- self.alpha = Parameter(torch.ones(1))
- else:
- self.alpha = 1.0
-
- def forward(self, x):
- outputs = self.conv1(x)
- outputs = self.conv2(outputs)
- outputs = self.conv3(outputs)
- return outputs + self.alpha * x if self.shortcut else outputs
-
-
-class BepC3(nn.Module):
- '''CSPStackRep Block'''
- def __init__(self, in_channels, out_channels, n=1, e=0.5, block=RepVGGBlock):
- super().__init__()
- c_ = int(out_channels * e) # hidden channels
- self.cv1 = ConvBNReLU(in_channels, c_, 1, 1)
- self.cv2 = ConvBNReLU(in_channels, c_, 1, 1)
- self.cv3 = ConvBNReLU(2 * c_, out_channels, 1, 1)
- if block == ConvBNSiLU:
- self.cv1 = ConvBNSiLU(in_channels, c_, 1, 1)
- self.cv2 = ConvBNSiLU(in_channels, c_, 1, 1)
- self.cv3 = ConvBNSiLU(2 * c_, out_channels, 1, 1)
-
- self.m = RepBlock(in_channels=c_, out_channels=c_, n=n, block=BottleRep, basic_block=block)
-
- def forward(self, x):
- return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
-
-
-class MBLABlock(nn.Module):
- ''' Multi Branch Layer Aggregation Block'''
- def __init__(self, in_channels, out_channels, n=1, e=0.5, block=RepVGGBlock):
- super().__init__()
- n = n // 2
- if n <= 0:
- n = 1
-
-        # add at most one extra branch
- if n == 1:
- n_list = [0, 1]
- else:
- extra_branch_steps = 1
- while extra_branch_steps * 2 < n:
- extra_branch_steps *= 2
- n_list = [0, extra_branch_steps, n]
- branch_num = len(n_list)
-
- c_ = int(out_channels * e) # hidden channels
- self.c = c_
- self.cv1 = ConvModule(in_channels, branch_num * self.c, 1, 1, 'relu', bias=False)
-        self.cv2 = ConvModule((sum(n_list) + branch_num) * self.c, out_channels, 1, 1, 'relu', bias=False)
-
- if block == ConvBNSiLU:
- self.cv1 = ConvModule(in_channels, branch_num * self.c, 1, 1, 'silu', bias=False)
-            self.cv2 = ConvModule((sum(n_list) + branch_num) * self.c, out_channels, 1, 1, 'silu', bias=False)
-
- self.m = nn.ModuleList()
- for n_list_i in n_list[1:]:
- self.m.append(nn.Sequential(*(BottleRep3(self.c, self.c, basic_block=block, weight=True) for _ in range(n_list_i))))
-
- self.split_num = tuple([self.c]*branch_num)
-
- def forward(self, x):
- y = list(self.cv1(x).split(self.split_num, 1))
- all_y = [y[0]]
- for m_idx, m_i in enumerate(self.m):
- all_y.append(y[m_idx+1])
- all_y.extend(m(all_y[-1]) for m in m_i)
- return self.cv2(torch.cat(all_y, 1))
-
-
-class BiFusion(nn.Module):
- '''BiFusion Block in PAN'''
- def __init__(self, in_channels, out_channels):
- super().__init__()
- self.cv1 = ConvBNReLU(in_channels[0], out_channels, 1, 1)
- self.cv2 = ConvBNReLU(in_channels[1], out_channels, 1, 1)
- self.cv3 = ConvBNReLU(out_channels * 3, out_channels, 1, 1)
-
- self.upsample = Transpose(
- in_channels=out_channels,
- out_channels=out_channels,
- )
- self.downsample = ConvBNReLU(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=3,
- stride=2
- )
-
- def forward(self, x):
- x0 = self.upsample(x[0])
- x1 = self.cv1(x[1])
- x2 = self.downsample(self.cv2(x[2]))
- return self.cv3(torch.cat((x0, x1, x2), dim=1))
-
-
-def get_block(mode):
- if mode == 'repvgg':
- return RepVGGBlock
- elif mode == 'qarepvgg':
- return QARepVGGBlock
- elif mode == 'qarepvggv2':
- return QARepVGGBlockV2
- elif mode == 'hyper_search':
- return LinearAddBlock
- elif mode == 'repopt':
- return RealVGGBlock
- elif mode == 'conv_relu':
- return ConvBNReLU
- elif mode == 'conv_silu':
- return ConvBNSiLU
- else:
- raise NotImplementedError("Undefied Repblock choice for mode {}".format(mode))
-
-
-class SEBlock(nn.Module):
-
- def __init__(self, channel, reduction=4):
- super().__init__()
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- self.conv1 = nn.Conv2d(
- in_channels=channel,
- out_channels=channel // reduction,
- kernel_size=1,
- stride=1,
- padding=0)
- self.relu = nn.ReLU()
- self.conv2 = nn.Conv2d(
- in_channels=channel // reduction,
- out_channels=channel,
- kernel_size=1,
- stride=1,
- padding=0)
- self.hardsigmoid = nn.Hardsigmoid()
-
- def forward(self, x):
- identity = x
- x = self.avg_pool(x)
- x = self.conv1(x)
- x = self.relu(x)
- x = self.conv2(x)
- x = self.hardsigmoid(x)
- out = identity * x
- return out
-
-
-def channel_shuffle(x, groups):
- batchsize, num_channels, height, width = x.data.size()
- channels_per_group = num_channels // groups
- # reshape
- x = x.view(batchsize, groups, channels_per_group, height, width)
- x = torch.transpose(x, 1, 2).contiguous()
- # flatten
- x = x.view(batchsize, -1, height, width)
-
- return x
-
-
-class Lite_EffiBlockS1(nn.Module):
-
- def __init__(self,
- in_channels,
- mid_channels,
- out_channels,
- stride):
- super().__init__()
- self.conv_pw_1 = ConvBNHS(
- in_channels=in_channels // 2,
- out_channels=mid_channels,
- kernel_size=1,
- stride=1,
- padding=0,
- groups=1)
- self.conv_dw_1 = ConvBN(
- in_channels=mid_channels,
- out_channels=mid_channels,
- kernel_size=3,
- stride=stride,
- padding=1,
- groups=mid_channels)
- self.se = SEBlock(mid_channels)
- self.conv_1 = ConvBNHS(
- in_channels=mid_channels,
- out_channels=out_channels // 2,
- kernel_size=1,
- stride=1,
- padding=0,
- groups=1)
- def forward(self, inputs):
- x1, x2 = torch.split(
- inputs,
- split_size_or_sections=[inputs.shape[1] // 2, inputs.shape[1] // 2],
- dim=1)
- x2 = self.conv_pw_1(x2)
- x3 = self.conv_dw_1(x2)
- x3 = self.se(x3)
- x3 = self.conv_1(x3)
- out = torch.cat([x1, x3], axis=1)
- return channel_shuffle(out, 2)
-
-
-class Lite_EffiBlockS2(nn.Module):
-
- def __init__(self,
- in_channels,
- mid_channels,
- out_channels,
- stride):
- super().__init__()
- # branch1
- self.conv_dw_1 = ConvBN(
- in_channels=in_channels,
- out_channels=in_channels,
- kernel_size=3,
- stride=stride,
- padding=1,
- groups=in_channels)
- self.conv_1 = ConvBNHS(
- in_channels=in_channels,
- out_channels=out_channels // 2,
- kernel_size=1,
- stride=1,
- padding=0,
- groups=1)
- # branch2
- self.conv_pw_2 = ConvBNHS(
- in_channels=in_channels,
- out_channels=mid_channels // 2,
- kernel_size=1,
- stride=1,
- padding=0,
- groups=1)
- self.conv_dw_2 = ConvBN(
- in_channels=mid_channels // 2,
- out_channels=mid_channels // 2,
- kernel_size=3,
- stride=stride,
- padding=1,
- groups=mid_channels // 2)
- self.se = SEBlock(mid_channels // 2)
- self.conv_2 = ConvBNHS(
- in_channels=mid_channels // 2,
- out_channels=out_channels // 2,
- kernel_size=1,
- stride=1,
- padding=0,
- groups=1)
- self.conv_dw_3 = ConvBNHS(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- groups=out_channels)
- self.conv_pw_3 = ConvBNHS(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=1,
- stride=1,
- padding=0,
- groups=1)
-
- def forward(self, inputs):
- x1 = self.conv_dw_1(inputs)
- x1 = self.conv_1(x1)
- x2 = self.conv_pw_2(inputs)
- x2 = self.conv_dw_2(x2)
- x2 = self.se(x2)
- x2 = self.conv_2(x2)
- out = torch.cat([x1, x2], axis=1)
- out = self.conv_dw_3(out)
- out = self.conv_pw_3(out)
- return out
-
-
-class DPBlock(nn.Module):
-
- def __init__(self,
- in_channel=96,
- out_channel=96,
- kernel_size=3,
- stride=1):
- super().__init__()
- self.conv_dw_1 = nn.Conv2d(
- in_channels=in_channel,
- out_channels=out_channel,
- kernel_size=kernel_size,
- groups=out_channel,
- padding=(kernel_size - 1) // 2,
- stride=stride)
- self.bn_1 = nn.BatchNorm2d(out_channel)
- self.act_1 = nn.Hardswish()
- self.conv_pw_1 = nn.Conv2d(
- in_channels=out_channel,
- out_channels=out_channel,
- kernel_size=1,
- groups=1,
- padding=0)
- self.bn_2 = nn.BatchNorm2d(out_channel)
- self.act_2 = nn.Hardswish()
-
- def forward(self, x):
- x = self.act_1(self.bn_1(self.conv_dw_1(x)))
- x = self.act_2(self.bn_2(self.conv_pw_1(x)))
- return x
-
- def forward_fuse(self, x):
- x = self.act_1(self.conv_dw_1(x))
- x = self.act_2(self.conv_pw_1(x))
- return x
-
-
-class DarknetBlock(nn.Module):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size=3,
- expansion=0.5):
- super().__init__()
- hidden_channels = int(out_channels * expansion)
- self.conv_1 = ConvBNHS(
- in_channels=in_channels,
- out_channels=hidden_channels,
- kernel_size=1,
- stride=1,
- padding=0)
- self.conv_2 = DPBlock(
- in_channel=hidden_channels,
- out_channel=out_channels,
- kernel_size=kernel_size,
- stride=1)
-
- def forward(self, x):
- out = self.conv_1(x)
- out = self.conv_2(out)
- return out
-
-
-class CSPBlock(nn.Module):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size=3,
- expand_ratio=0.5):
- super().__init__()
- mid_channels = int(out_channels * expand_ratio)
- self.conv_1 = ConvBNHS(in_channels, mid_channels, 1, 1, 0)
- self.conv_2 = ConvBNHS(in_channels, mid_channels, 1, 1, 0)
- self.conv_3 = ConvBNHS(2 * mid_channels, out_channels, 1, 1, 0)
- self.blocks = DarknetBlock(mid_channels,
- mid_channels,
- kernel_size,
- 1.0)
-
- def forward(self, x):
- x_1 = self.conv_1(x)
- x_1 = self.blocks(x_1)
- x_2 = self.conv_2(x)
- x = torch.cat((x_1, x_2), axis=1)
- x = self.conv_3(x)
- return x
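To close out this file, a minimal usage sketch for `CSPBlock`, again assuming the `yolov6.layers.common` path from this tree: every conv inside is stride 1, so the block preserves spatial resolution.

```python
import torch
from yolov6.layers.common import CSPBlock  # module deleted in this diff

blk = CSPBlock(in_channels=64, out_channels=64, kernel_size=3, expand_ratio=0.5)
x = torch.randn(1, 64, 40, 40)
print(blk(x).shape)  # torch.Size([1, 64, 40, 40]); stride 1 throughout
```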
diff --git a/cv/detection/yolov6/pytorch/yolov6/layers/dbb_transforms.py b/cv/detection/yolov6/pytorch/yolov6/layers/dbb_transforms.py
deleted file mode 100644
index cd93d0e23ad459d3cfa8d1a608383bbcb3a0cbfb..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/layers/dbb_transforms.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import torch
-import numpy as np
-import torch.nn.functional as F
-
-
-def transI_fusebn(kernel, bn):
- gamma = bn.weight
- std = (bn.running_var + bn.eps).sqrt()
- return kernel * ((gamma / std).reshape(-1, 1, 1, 1)), bn.bias - bn.running_mean * gamma / std
-
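`transI_fusebn` folds a BatchNorm (in eval mode, i.e. using running statistics) into the preceding conv kernel and a bias. A small numerical check, assuming the `yolov6.layers.dbb_transforms` path from this diff:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from yolov6.layers.dbb_transforms import transI_fusebn  # module deleted in this diff

conv = nn.Conv2d(8, 16, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(16).eval()              # fusion relies on the running statistics
bn.running_mean.normal_()
bn.running_var.uniform_(0.5, 2.0)

k, b = transI_fusebn(conv.weight, bn)
x = torch.randn(2, 8, 10, 10)
with torch.no_grad():
    ref = bn(conv(x))
    fused = F.conv2d(x, k, b, padding=1)
print(torch.allclose(ref, fused, atol=1e-5))  # True
```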
-
-def transII_addbranch(kernels, biases):
- return sum(kernels), sum(biases)
-
-
-def transIII_1x1_kxk(k1, b1, k2, b2, groups):
- if groups == 1:
- k = F.conv2d(k2, k1.permute(1, 0, 2, 3))  # fold the 1x1 kernel into the kxk kernel
- b_hat = (k2 * b1.reshape(1, -1, 1, 1)).sum((1, 2, 3))
- else:
- k_slices = []
- b_slices = []
- k1_T = k1.permute(1, 0, 2, 3)
- k1_group_width = k1.size(0) // groups
- k2_group_width = k2.size(0) // groups
- for g in range(groups):
- k1_T_slice = k1_T[:, g*k1_group_width:(g+1)*k1_group_width, :, :]
- k2_slice = k2[g*k2_group_width:(g+1)*k2_group_width, :, :, :]
- k_slices.append(F.conv2d(k2_slice, k1_T_slice))
- b_slices.append((k2_slice * b1[g * k1_group_width:(g+1) * k1_group_width].reshape(1, -1, 1, 1)).sum((1, 2, 3)))
- k, b_hat = transIV_depthconcat(k_slices, b_slices)
- return k, b_hat + b2
-
-
-def transIV_depthconcat(kernels, biases):
- return torch.cat(kernels, dim=0), torch.cat(biases)
-
-
-def transV_avg(channels, kernel_size, groups):
- input_dim = channels // groups
- k = torch.zeros((channels, input_dim, kernel_size, kernel_size))
- k[np.arange(channels), np.tile(np.arange(input_dim), groups), :, :] = 1.0 / kernel_size ** 2
- return k
-
-
-# This has not been tested with non-square kernels (kernel.size(2) != kernel.size(3)) nor even-size kernels
-def transVI_multiscale(kernel, target_kernel_size):
- H_pixels_to_pad = (target_kernel_size - kernel.size(2)) // 2
- W_pixels_to_pad = (target_kernel_size - kernel.size(3)) // 2
- return F.pad(kernel, [H_pixels_to_pad, H_pixels_to_pad, W_pixels_to_pad, W_pixels_to_pad])
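And a quick check for `transVI_multiscale`: padding a 1x1 kernel to 3x3 centres the original weights, which keeps the convolution equivalent for odd target sizes.

```python
import torch
from yolov6.layers.dbb_transforms import transVI_multiscale  # module deleted in this diff

k1 = torch.randn(16, 8, 1, 1)
k3 = transVI_multiscale(k1, target_kernel_size=3)
print(k3.shape)                                      # torch.Size([16, 8, 3, 3])
print(torch.equal(k3[:, :, 1, 1], k1[:, :, 0, 0]))   # True: the 1x1 kernel sits at the centre
```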
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/efficientrep.py b/cv/detection/yolov6/pytorch/yolov6/models/efficientrep.py
deleted file mode 100644
index 5d0de7cea1676847ab8459ae0ae212203119751c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/efficientrep.py
+++ /dev/null
@@ -1,582 +0,0 @@
-from torch import nn
-from yolov6.layers.common import BottleRep, RepVGGBlock, RepBlock, BepC3, SimSPPF, SPPF, SimCSPSPPF, CSPSPPF, ConvBNSiLU, \
- MBLABlock, ConvBNHS, Lite_EffiBlockS2, Lite_EffiBlockS1
-
-
-class EfficientRep(nn.Module):
- '''EfficientRep Backbone
- EfficientRep is handcrafted by hardware-aware neural network design.
- With its rep-style structure, EfficientRep is friendly to high-computation hardware (e.g., GPU).
- '''
-
- def __init__(
- self,
- in_channels=3,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock,
- fuse_P2=False,
- cspsppf=False
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
- self.fuse_P2 = fuse_P2
-
- self.stem = block(
- in_channels=in_channels,
- out_channels=channels_list[0],
- kernel_size=3,
- stride=2
- )
-
- self.ERBlock_2 = nn.Sequential(
- block(
- in_channels=channels_list[0],
- out_channels=channels_list[1],
- kernel_size=3,
- stride=2
- ),
- RepBlock(
- in_channels=channels_list[1],
- out_channels=channels_list[1],
- n=num_repeats[1],
- block=block,
- )
- )
-
- self.ERBlock_3 = nn.Sequential(
- block(
- in_channels=channels_list[1],
- out_channels=channels_list[2],
- kernel_size=3,
- stride=2
- ),
- RepBlock(
- in_channels=channels_list[2],
- out_channels=channels_list[2],
- n=num_repeats[2],
- block=block,
- )
- )
-
- self.ERBlock_4 = nn.Sequential(
- block(
- in_channels=channels_list[2],
- out_channels=channels_list[3],
- kernel_size=3,
- stride=2
- ),
- RepBlock(
- in_channels=channels_list[3],
- out_channels=channels_list[3],
- n=num_repeats[3],
- block=block,
- )
- )
-
- channel_merge_layer = SPPF if block == ConvBNSiLU else SimSPPF
- if cspsppf:
- channel_merge_layer = CSPSPPF if block == ConvBNSiLU else SimCSPSPPF
-
- self.ERBlock_5 = nn.Sequential(
- block(
- in_channels=channels_list[3],
- out_channels=channels_list[4],
- kernel_size=3,
- stride=2,
- ),
- RepBlock(
- in_channels=channels_list[4],
- out_channels=channels_list[4],
- n=num_repeats[4],
- block=block,
- ),
- channel_merge_layer(
- in_channels=channels_list[4],
- out_channels=channels_list[4],
- kernel_size=5
- )
- )
-
- def forward(self, x):
-
- outputs = []
- x = self.stem(x)
- x = self.ERBlock_2(x)
- if self.fuse_P2:
- outputs.append(x)
- x = self.ERBlock_3(x)
- outputs.append(x)
- x = self.ERBlock_4(x)
- outputs.append(x)
- x = self.ERBlock_5(x)
- outputs.append(x)
-
- return tuple(outputs)
-
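A sketch of what the backbone returns, assuming the module paths from this diff; the `channels_list`/`num_repeats` values are illustrative (roughly YOLOv6-N sized), not canonical configs.

```python
import torch
from yolov6.layers.common import RepVGGBlock          # modules deleted in this diff
from yolov6.models.efficientrep import EfficientRep

backbone = EfficientRep(in_channels=3,
                        channels_list=[16, 32, 64, 128, 256],  # illustrative widths
                        num_repeats=[1, 2, 4, 6, 2],
                        block=RepVGGBlock)
feats = backbone(torch.randn(1, 3, 640, 640))
print([tuple(f.shape) for f in feats])
# [(1, 64, 80, 80), (1, 128, 40, 40), (1, 256, 20, 20)] -> strides 8, 16, 32
```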
-
-class EfficientRep6(nn.Module):
- '''EfficientRep+P6 Backbone
- EfficientRep is handcrafted by hardware-aware neural network design.
- With its rep-style structure, EfficientRep is friendly to high-computation hardware (e.g., GPU).
- '''
-
- def __init__(
- self,
- in_channels=3,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock,
- fuse_P2=False,
- cspsppf=False
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
- self.fuse_P2 = fuse_P2
-
- self.stem = block(
- in_channels=in_channels,
- out_channels=channels_list[0],
- kernel_size=3,
- stride=2
- )
-
- self.ERBlock_2 = nn.Sequential(
- block(
- in_channels=channels_list[0],
- out_channels=channels_list[1],
- kernel_size=3,
- stride=2
- ),
- RepBlock(
- in_channels=channels_list[1],
- out_channels=channels_list[1],
- n=num_repeats[1],
- block=block,
- )
- )
-
- self.ERBlock_3 = nn.Sequential(
- block(
- in_channels=channels_list[1],
- out_channels=channels_list[2],
- kernel_size=3,
- stride=2
- ),
- RepBlock(
- in_channels=channels_list[2],
- out_channels=channels_list[2],
- n=num_repeats[2],
- block=block,
- )
- )
-
- self.ERBlock_4 = nn.Sequential(
- block(
- in_channels=channels_list[2],
- out_channels=channels_list[3],
- kernel_size=3,
- stride=2
- ),
- RepBlock(
- in_channels=channels_list[3],
- out_channels=channels_list[3],
- n=num_repeats[3],
- block=block,
- )
- )
-
- self.ERBlock_5 = nn.Sequential(
- block(
- in_channels=channels_list[3],
- out_channels=channels_list[4],
- kernel_size=3,
- stride=2,
- ),
- RepBlock(
- in_channels=channels_list[4],
- out_channels=channels_list[4],
- n=num_repeats[4],
- block=block,
- )
- )
-
- channel_merge_layer = SimSPPF if not cspsppf else SimCSPSPPF
-
- self.ERBlock_6 = nn.Sequential(
- block(
- in_channels=channels_list[4],
- out_channels=channels_list[5],
- kernel_size=3,
- stride=2,
- ),
- RepBlock(
- in_channels=channels_list[5],
- out_channels=channels_list[5],
- n=num_repeats[5],
- block=block,
- ),
- channel_merge_layer(
- in_channels=channels_list[5],
- out_channels=channels_list[5],
- kernel_size=5
- )
- )
-
- def forward(self, x):
-
- outputs = []
- x = self.stem(x)
- x = self.ERBlock_2(x)
- if self.fuse_P2:
- outputs.append(x)
- x = self.ERBlock_3(x)
- outputs.append(x)
- x = self.ERBlock_4(x)
- outputs.append(x)
- x = self.ERBlock_5(x)
- outputs.append(x)
- x = self.ERBlock_6(x)
- outputs.append(x)
-
- return tuple(outputs)
-
-
-class CSPBepBackbone(nn.Module):
- """
- CSPBepBackbone module.
- """
-
- def __init__(
- self,
- in_channels=3,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock,
- csp_e=float(1)/2,
- fuse_P2=False,
- cspsppf=False,
- stage_block_type="BepC3"
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- if stage_block_type == "BepC3":
- stage_block = BepC3
- elif stage_block_type == "MBLABlock":
- stage_block = MBLABlock
- else:
- raise NotImplementedError
-
- self.fuse_P2 = fuse_P2
-
- self.stem = block(
- in_channels=in_channels,
- out_channels=channels_list[0],
- kernel_size=3,
- stride=2
- )
-
- self.ERBlock_2 = nn.Sequential(
- block(
- in_channels=channels_list[0],
- out_channels=channels_list[1],
- kernel_size=3,
- stride=2
- ),
- stage_block(
- in_channels=channels_list[1],
- out_channels=channels_list[1],
- n=num_repeats[1],
- e=csp_e,
- block=block,
- )
- )
-
- self.ERBlock_3 = nn.Sequential(
- block(
- in_channels=channels_list[1],
- out_channels=channels_list[2],
- kernel_size=3,
- stride=2
- ),
- stage_block(
- in_channels=channels_list[2],
- out_channels=channels_list[2],
- n=num_repeats[2],
- e=csp_e,
- block=block,
- )
- )
-
- self.ERBlock_4 = nn.Sequential(
- block(
- in_channels=channels_list[2],
- out_channels=channels_list[3],
- kernel_size=3,
- stride=2
- ),
- stage_block(
- in_channels=channels_list[3],
- out_channels=channels_list[3],
- n=num_repeats[3],
- e=csp_e,
- block=block,
- )
- )
-
- channel_merge_layer = SPPF if block == ConvBNSiLU else SimSPPF
- if cspsppf:
- channel_merge_layer = CSPSPPF if block == ConvBNSiLU else SimCSPSPPF
-
- self.ERBlock_5 = nn.Sequential(
- block(
- in_channels=channels_list[3],
- out_channels=channels_list[4],
- kernel_size=3,
- stride=2,
- ),
- stage_block(
- in_channels=channels_list[4],
- out_channels=channels_list[4],
- n=num_repeats[4],
- e=csp_e,
- block=block,
- ),
- channel_merge_layer(
- in_channels=channels_list[4],
- out_channels=channels_list[4],
- kernel_size=5
- )
- )
-
- def forward(self, x):
-
- outputs = []
- x = self.stem(x)
- x = self.ERBlock_2(x)
- if self.fuse_P2:
- outputs.append(x)
- x = self.ERBlock_3(x)
- outputs.append(x)
- x = self.ERBlock_4(x)
- outputs.append(x)
- x = self.ERBlock_5(x)
- outputs.append(x)
-
- return tuple(outputs)
-
-
-class CSPBepBackbone_P6(nn.Module):
- """
- CSPBepBackbone+P6 module.
- """
-
- def __init__(
- self,
- in_channels=3,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock,
- csp_e=float(1)/2,
- fuse_P2=False,
- cspsppf=False,
- stage_block_type="BepC3"
- ):
- super().__init__()
- assert channels_list is not None
- assert num_repeats is not None
-
- if stage_block_type == "BepC3":
- stage_block = BepC3
- elif stage_block_type == "MBLABlock":
- stage_block = MBLABlock
- else:
- raise NotImplementedError
-
- self.fuse_P2 = fuse_P2
-
- self.stem = block(
- in_channels=in_channels,
- out_channels=channels_list[0],
- kernel_size=3,
- stride=2
- )
-
- self.ERBlock_2 = nn.Sequential(
- block(
- in_channels=channels_list[0],
- out_channels=channels_list[1],
- kernel_size=3,
- stride=2
- ),
- stage_block(
- in_channels=channels_list[1],
- out_channels=channels_list[1],
- n=num_repeats[1],
- e=csp_e,
- block=block,
- )
- )
-
- self.ERBlock_3 = nn.Sequential(
- block(
- in_channels=channels_list[1],
- out_channels=channels_list[2],
- kernel_size=3,
- stride=2
- ),
- stage_block(
- in_channels=channels_list[2],
- out_channels=channels_list[2],
- n=num_repeats[2],
- e=csp_e,
- block=block,
- )
- )
-
- self.ERBlock_4 = nn.Sequential(
- block(
- in_channels=channels_list[2],
- out_channels=channels_list[3],
- kernel_size=3,
- stride=2
- ),
- stage_block(
- in_channels=channels_list[3],
- out_channels=channels_list[3],
- n=num_repeats[3],
- e=csp_e,
- block=block,
- )
- )
-
- channel_merge_layer = SPPF if block == ConvBNSiLU else SimSPPF
- if cspsppf:
- channel_merge_layer = CSPSPPF if block == ConvBNSiLU else SimCSPSPPF
-
- self.ERBlock_5 = nn.Sequential(
- block(
- in_channels=channels_list[3],
- out_channels=channels_list[4],
- kernel_size=3,
- stride=2,
- ),
- stage_block(
- in_channels=channels_list[4],
- out_channels=channels_list[4],
- n=num_repeats[4],
- e=csp_e,
- block=block,
- ),
- )
- self.ERBlock_6 = nn.Sequential(
- block(
- in_channels=channels_list[4],
- out_channels=channels_list[5],
- kernel_size=3,
- stride=2,
- ),
- stage_block(
- in_channels=channels_list[5],
- out_channels=channels_list[5],
- n=num_repeats[5],
- e=csp_e,
- block=block,
- ),
- channel_merge_layer(
- in_channels=channels_list[5],
- out_channels=channels_list[5],
- kernel_size=5
- )
- )
-
- def forward(self, x):
-
- outputs = []
- x = self.stem(x)
- x = self.ERBlock_2(x)
- outputs.append(x)
- x = self.ERBlock_3(x)
- outputs.append(x)
- x = self.ERBlock_4(x)
- outputs.append(x)
- x = self.ERBlock_5(x)
- outputs.append(x)
- x = self.ERBlock_6(x)
- outputs.append(x)
-
- return tuple(outputs)
-
-class Lite_EffiBackbone(nn.Module):
- def __init__(self,
- in_channels,
- mid_channels,
- out_channels,
- num_repeat=[1, 3, 7, 3]
- ):
- super().__init__()
- out_channels[0] = 24  # the stem width is fixed at 24 channels
- self.conv_0 = ConvBNHS(in_channels=in_channels,
- out_channels=out_channels[0],
- kernel_size=3,
- stride=2,
- padding=1)
-
- self.lite_effiblock_1 = self.build_block(num_repeat[0],
- out_channels[0],
- mid_channels[1],
- out_channels[1])
-
- self.lite_effiblock_2 = self.build_block(num_repeat[1],
- out_channels[1],
- mid_channels[2],
- out_channels[2])
-
- self.lite_effiblock_3 = self.build_block(num_repeat[2],
- out_channels[2],
- mid_channels[3],
- out_channels[3])
-
- self.lite_effiblock_4 = self.build_block(num_repeat[3],
- out_channels[3],
- mid_channels[4],
- out_channels[4])
-
- def forward(self, x):
- outputs = []
- x = self.conv_0(x)
- x = self.lite_effiblock_1(x)
- x = self.lite_effiblock_2(x)
- outputs.append(x)
- x = self.lite_effiblock_3(x)
- outputs.append(x)
- x = self.lite_effiblock_4(x)
- outputs.append(x)
- return tuple(outputs)
-
- @staticmethod
- def build_block(num_repeat, in_channels, mid_channels, out_channels):
- block_list = nn.Sequential()
- for i in range(num_repeat):
- if i == 0:
- block = Lite_EffiBlockS2(
- in_channels=in_channels,
- mid_channels=mid_channels,
- out_channels=out_channels,
- stride=2)
- else:
- block = Lite_EffiBlockS1(
- in_channels=out_channels,
- mid_channels=mid_channels,
- out_channels=out_channels,
- stride=1)
- block_list.add_module(str(i), block)
- return block_list
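`build_block` follows the usual stage pattern: one stride-2 downsampling block followed by stride-1 blocks. A minimal sketch, assuming the module path from this diff:

```python
import torch.nn as nn
from yolov6.models.efficientrep import Lite_EffiBackbone  # module deleted in this diff

stage = Lite_EffiBackbone.build_block(num_repeat=3, in_channels=24,
                                      mid_channels=48, out_channels=32)
assert isinstance(stage, nn.Sequential) and len(stage) == 3
# stage[0] is a stride-2 Lite_EffiBlockS2; stage[1:] are stride-1 Lite_EffiBlockS1
```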
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/effidehead.py b/cv/detection/yolov6/pytorch/yolov6/models/effidehead.py
deleted file mode 100644
index 55b7b0697f9bd037351b22a22ed5769ec2c5a449..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/effidehead.py
+++ /dev/null
@@ -1,293 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import math
-from yolov6.layers.common import *
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox
-
-
-class Detect(nn.Module):
- export = False
- '''Efficient Decoupled Head
- With hardware-aware design, the decoupled head is optimized with
- hybrid-channel methods.
- '''
- def __init__(self, num_classes=80, num_layers=3, inplace=True, head_layers=None, use_dfl=True, reg_max=16): # detection layer
- super().__init__()
- assert head_layers is not None
- self.nc = num_classes # number of classes
- self.no = num_classes + 5 # number of outputs per anchor
- self.nl = num_layers # number of detection layers
- self.grid = [torch.zeros(1)] * num_layers
- self.prior_prob = 1e-2
- self.inplace = inplace
- stride = [8, 16, 32] if num_layers == 3 else [8, 16, 32, 64] # strides computed during build
- self.stride = torch.tensor(stride)
- self.use_dfl = use_dfl
- self.reg_max = reg_max
- self.proj_conv = nn.Conv2d(self.reg_max + 1, 1, 1, bias=False)
- self.grid_cell_offset = 0.5
- self.grid_cell_size = 5.0
-
- # Init decouple head
- self.stems = nn.ModuleList()
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- self.cls_preds = nn.ModuleList()
- self.reg_preds = nn.ModuleList()
-
- # Efficient decoupled head layers
- for i in range(num_layers):
- idx = i*5
- self.stems.append(head_layers[idx])
- self.cls_convs.append(head_layers[idx+1])
- self.reg_convs.append(head_layers[idx+2])
- self.cls_preds.append(head_layers[idx+3])
- self.reg_preds.append(head_layers[idx+4])
-
- def initialize_biases(self):
-
- for conv in self.cls_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(-math.log((1 - self.prior_prob) / self.prior_prob))
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- for conv in self.reg_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(1.0)
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- self.proj = nn.Parameter(torch.linspace(0, self.reg_max, self.reg_max + 1), requires_grad=False)
- self.proj_conv.weight = nn.Parameter(self.proj.view([1, self.reg_max + 1, 1, 1]).clone().detach(),
- requires_grad=False)
-
- def forward(self, x):
- if self.training:
- cls_score_list = []
- reg_distri_list = []
-
- for i in range(self.nl):
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
- cls_feat = self.cls_convs[i](cls_x)
- cls_output = self.cls_preds[i](cls_feat)
- reg_feat = self.reg_convs[i](reg_x)
- reg_output = self.reg_preds[i](reg_feat)
-
- cls_output = torch.sigmoid(cls_output)
- cls_score_list.append(cls_output.flatten(2).permute((0, 2, 1)))
- reg_distri_list.append(reg_output.flatten(2).permute((0, 2, 1)))
-
- cls_score_list = torch.cat(cls_score_list, axis=1)
- reg_distri_list = torch.cat(reg_distri_list, axis=1)
-
- return x, cls_score_list, reg_distri_list
- else:
- cls_score_list = []
- reg_dist_list = []
-
- for i in range(self.nl):
- b, _, h, w = x[i].shape
- l = h * w
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
- cls_feat = self.cls_convs[i](cls_x)
- cls_output = self.cls_preds[i](cls_feat)
- reg_feat = self.reg_convs[i](reg_x)
- reg_output = self.reg_preds[i](reg_feat)
-
- if self.use_dfl:
- reg_output = reg_output.reshape([-1, 4, self.reg_max + 1, l]).permute(0, 2, 1, 3)
- reg_output = self.proj_conv(F.softmax(reg_output, dim=1))
-
- cls_output = torch.sigmoid(cls_output)
-
- if self.export:
- cls_score_list.append(cls_output)
- reg_dist_list.append(reg_output)
- else:
- cls_score_list.append(cls_output.reshape([b, self.nc, l]))
- reg_dist_list.append(reg_output.reshape([b, 4, l]))
-
- if self.export:
- return tuple(torch.cat([cls, reg], 1) for cls, reg in zip(cls_score_list, reg_dist_list))
-
- cls_score_list = torch.cat(cls_score_list, axis=-1).permute(0, 2, 1)
- reg_dist_list = torch.cat(reg_dist_list, axis=-1).permute(0, 2, 1)
-
-
- anchor_points, stride_tensor = generate_anchors(
- x, self.stride, self.grid_cell_size, self.grid_cell_offset, device=x[0].device, is_eval=True, mode='af')
-
- pred_bboxes = dist2bbox(reg_dist_list, anchor_points, box_format='xywh')
- pred_bboxes *= stride_tensor
- return torch.cat(
- [
- pred_bboxes,
- torch.ones((b, pred_bboxes.shape[1], 1), device=pred_bboxes.device, dtype=pred_bboxes.dtype),
- cls_score_list
- ],
- axis=-1)
-
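In the inference branch above, `proj_conv` is a frozen 1x1 conv whose weights are `linspace(0, reg_max)`, so applying it after the softmax is just the expectation of the discrete distance distribution (DFL decoding). A self-contained sketch of that reduction, with illustrative tensor sizes:

```python
import torch
import torch.nn.functional as F

reg_max = 16
proj = torch.linspace(0, reg_max, reg_max + 1)       # the frozen proj_conv weights
logits = torch.randn(2, reg_max + 1, 4, 100)         # (batch, bins, 4 sides, h*w)
probs = F.softmax(logits, dim=1)

# proj_conv with these weights reduces to the expectation over the bins
dist = (probs * proj.view(1, -1, 1, 1)).sum(dim=1)   # (2, 4, 100), values in [0, reg_max]
print(dist.shape, float(dist.min()) >= 0, float(dist.max()) <= reg_max)
```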
-
-def build_effidehead_layer(channels_list, num_anchors, num_classes, reg_max=16, num_layers=3):
-
- chx = [6, 8, 10] if num_layers == 3 else [8, 9, 10, 11]
-
- head_layers = nn.Sequential(
- # stem0
- ConvBNSiLU(
- in_channels=channels_list[chx[0]],
- out_channels=channels_list[chx[0]],
- kernel_size=1,
- stride=1
- ),
- # cls_conv0
- ConvBNSiLU(
- in_channels=channels_list[chx[0]],
- out_channels=channels_list[chx[0]],
- kernel_size=3,
- stride=1
- ),
- # reg_conv0
- ConvBNSiLU(
- in_channels=channels_list[chx[0]],
- out_channels=channels_list[chx[0]],
- kernel_size=3,
- stride=1
- ),
- # cls_pred0
- nn.Conv2d(
- in_channels=channels_list[chx[0]],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred0
- nn.Conv2d(
- in_channels=channels_list[chx[0]],
- out_channels=4 * (reg_max + num_anchors),
- kernel_size=1
- ),
- # stem1
- ConvBNSiLU(
- in_channels=channels_list[chx[1]],
- out_channels=channels_list[chx[1]],
- kernel_size=1,
- stride=1
- ),
- # cls_conv1
- ConvBNSiLU(
- in_channels=channels_list[chx[1]],
- out_channels=channels_list[chx[1]],
- kernel_size=3,
- stride=1
- ),
- # reg_conv1
- ConvBNSiLU(
- in_channels=channels_list[chx[1]],
- out_channels=channels_list[chx[1]],
- kernel_size=3,
- stride=1
- ),
- # cls_pred1
- nn.Conv2d(
- in_channels=channels_list[chx[1]],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred1
- nn.Conv2d(
- in_channels=channels_list[chx[1]],
- out_channels=4 * (reg_max + num_anchors),
- kernel_size=1
- ),
- # stem2
- ConvBNSiLU(
- in_channels=channels_list[chx[2]],
- out_channels=channels_list[chx[2]],
- kernel_size=1,
- stride=1
- ),
- # cls_conv2
- ConvBNSiLU(
- in_channels=channels_list[chx[2]],
- out_channels=channels_list[chx[2]],
- kernel_size=3,
- stride=1
- ),
- # reg_conv2
- ConvBNSiLU(
- in_channels=channels_list[chx[2]],
- out_channels=channels_list[chx[2]],
- kernel_size=3,
- stride=1
- ),
- # cls_pred2
- nn.Conv2d(
- in_channels=channels_list[chx[2]],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred2
- nn.Conv2d(
- in_channels=channels_list[chx[2]],
- out_channels=4 * (reg_max + num_anchors),
- kernel_size=1
- )
- )
-
- if num_layers == 4:
- head_layers.add_module('stem3',
- # stem3
- ConvBNSiLU(
- in_channels=channels_list[chx[3]],
- out_channels=channels_list[chx[3]],
- kernel_size=1,
- stride=1
- )
- )
- head_layers.add_module('cls_conv3',
- # cls_conv3
- ConvBNSiLU(
- in_channels=channels_list[chx[3]],
- out_channels=channels_list[chx[3]],
- kernel_size=3,
- stride=1
- )
- )
- head_layers.add_module('reg_conv3',
- # reg_conv3
- ConvBNSiLU(
- in_channels=channels_list[chx[3]],
- out_channels=channels_list[chx[3]],
- kernel_size=3,
- stride=1
- )
- )
- head_layers.add_module('cls_pred3',
- # cls_pred3
- nn.Conv2d(
- in_channels=channels_list[chx[3]],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- )
- )
- head_layers.add_module('reg_pred3',
- # reg_pred3
- nn.Conv2d(
- in_channels=channels_list[chx[3]],
- out_channels=4 * (reg_max + num_anchors),
- kernel_size=1
- )
- )
-
- return head_layers
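The returned container is flat: five modules per detection scale, consumed in `Detect.__init__` via `idx = i * 5` (stem, cls_conv, reg_conv, cls_pred, reg_pred). A sketch with illustrative widths, assuming the module path from this diff; indices 6, 8, 10 (`chx`) select the neck output channels.

```python
from yolov6.models.effidehead import build_effidehead_layer  # module deleted in this diff

channels_list = [16, 32, 64, 128, 256, 128, 64, 64, 128, 128, 256]  # illustrative
head = build_effidehead_layer(channels_list, num_anchors=1, num_classes=80)
print(len(head))  # 15: three scales x (stem, cls_conv, reg_conv, cls_pred, reg_pred)
```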
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/end2end.py b/cv/detection/yolov6/pytorch/yolov6/models/end2end.py
deleted file mode 100644
index c1f102ba6e8c612ec50260e6cc483deec0e14cce..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/end2end.py
+++ /dev/null
@@ -1,282 +0,0 @@
-import torch
-import torch.nn as nn
-import random
-
-
-class ORT_NMS(torch.autograd.Function):
- '''ONNX-Runtime NMS operation'''
- @staticmethod
- def forward(ctx,
- boxes,
- scores,
- max_output_boxes_per_class=torch.tensor([100]),
- iou_threshold=torch.tensor([0.45]),
- score_threshold=torch.tensor([0.25])):
- device = boxes.device
- batch = scores.shape[0]
- num_det = random.randint(0, 100)  # dummy output for tracing; the real op is emitted by symbolic()
- batches = torch.randint(0, batch, (num_det,)).sort()[0].to(device)
- idxs = torch.arange(100, 100 + num_det).to(device)
- zeros = torch.zeros((num_det,), dtype=torch.int64).to(device)
- selected_indices = torch.cat([batches[None], zeros[None], idxs[None]], 0).T.contiguous()
- selected_indices = selected_indices.to(torch.int64)
- return selected_indices
-
- @staticmethod
- def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold):
- return g.op("NonMaxSuppression", boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold)
-
-
-class TRT8_NMS(torch.autograd.Function):
- '''TensorRT NMS operation'''
- @staticmethod
- def forward(
- ctx,
- boxes,
- scores,
- background_class=-1,
- box_coding=1,
- iou_threshold=0.45,
- max_output_boxes=100,
- plugin_version="1",
- score_activation=0,
- score_threshold=0.25,
- ):
- batch_size, num_boxes, num_classes = scores.shape
- num_det = torch.randint(0, max_output_boxes, (batch_size, 1), dtype=torch.int32)
- det_boxes = torch.randn(batch_size, max_output_boxes, 4)
- det_scores = torch.randn(batch_size, max_output_boxes)
- det_classes = torch.randint(0, num_classes, (batch_size, max_output_boxes), dtype=torch.int32)
- return num_det, det_boxes, det_scores, det_classes
-
- @staticmethod
- def symbolic(g,
- boxes,
- scores,
- background_class=-1,
- box_coding=1,
- iou_threshold=0.45,
- max_output_boxes=100,
- plugin_version="1",
- score_activation=0,
- score_threshold=0.25):
- out = g.op("TRT::EfficientNMS_TRT",
- boxes,
- scores,
- background_class_i=background_class,
- box_coding_i=box_coding,
- iou_threshold_f=iou_threshold,
- max_output_boxes_i=max_output_boxes,
- plugin_version_s=plugin_version,
- score_activation_i=score_activation,
- score_threshold_f=score_threshold,
- outputs=4)
- nums, boxes, scores, classes = out
- return nums, boxes, scores, classes
-
-class TRT7_NMS(torch.autograd.Function):
- '''TensorRT NMS operation'''
- @staticmethod
- def forward(
- ctx,
- boxes,
- scores,
- plugin_version="1",
- shareLocation=1,
- backgroundLabelId=-1,
- numClasses=80,
- topK=1000,
- keepTopK=100,
- scoreThreshold=0.25,
- iouThreshold=0.45,
- isNormalized=0,
- clipBoxes=0,
- scoreBits=16,
- caffeSemantics=1,
- ):
- batch_size, num_boxes, numClasses = scores.shape
- num_det = torch.randint(0, keepTopK, (batch_size, 1), dtype=torch.int32)
- det_boxes = torch.randn(batch_size, keepTopK, 4)
- det_scores = torch.randn(batch_size, keepTopK)
- det_classes = torch.randint(0, numClasses, (batch_size, keepTopK)).float()
- return num_det, det_boxes, det_scores, det_classes
- @staticmethod
- def symbolic(g,
- boxes,
- scores,
- plugin_version='1',
- shareLocation=1,
- backgroundLabelId=-1,
- numClasses=80,
- topK=1000,
- keepTopK=100,
- scoreThreshold=0.25,
- iouThreshold=0.45,
- isNormalized=0,
- clipBoxes=0,
- scoreBits=16,
- caffeSemantics=1,
- ):
- out = g.op("TRT::BatchedNMSDynamic_TRT", # BatchedNMS_TRT BatchedNMSDynamic_TRT
- boxes,
- scores,
- shareLocation_i=shareLocation,
- plugin_version_s=plugin_version,
- backgroundLabelId_i=backgroundLabelId,
- numClasses_i=numClasses,
- topK_i=topK,
- keepTopK_i=keepTopK,
- scoreThreshold_f=scoreThreshold,
- iouThreshold_f=iouThreshold,
- isNormalized_i=isNormalized,
- clipBoxes_i=clipBoxes,
- scoreBits_i=scoreBits,
- caffeSemantics_i=caffeSemantics,
- outputs=4)
- nums, boxes, scores, classes = out
- return nums, boxes, scores, classes
-
-
-class ONNX_ORT(nn.Module):
- '''onnx module with ONNX-Runtime NMS operation.'''
- def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, device=None):
- super().__init__()
- self.device = device if device else torch.device("cpu")
- self.max_obj = torch.tensor([max_obj]).to(device)
- self.iou_threshold = torch.tensor([iou_thres]).to(device)
- self.score_threshold = torch.tensor([score_thres]).to(device)
- self.convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=self.device)
-
- def forward(self, x):
- batch, anchors, _ = x.shape
- box = x[:, :, :4]
- conf = x[:, :, 4:5]
- score = x[:, :, 5:]
- score *= conf
-
- nms_box = box @ self.convert_matrix
- nms_score = score.transpose(1, 2).contiguous()
-
- selected_indices = ORT_NMS.apply(nms_box, nms_score, self.max_obj, self.iou_threshold, self.score_threshold)
- batch_inds, cls_inds, box_inds = selected_indices.unbind(1)
- selected_score = nms_score[batch_inds, cls_inds, box_inds].unsqueeze(1)
- selected_box = nms_box[batch_inds, box_inds, ...]
-
- dets = torch.cat([selected_box, selected_score], dim=1)
-
- batched_dets = dets.unsqueeze(0).repeat(batch, 1, 1)
- batch_template = torch.arange(0, batch, dtype=batch_inds.dtype, device=batch_inds.device)
- batched_dets = batched_dets.where((batch_inds == batch_template.unsqueeze(1)).unsqueeze(-1),batched_dets.new_zeros(1))
-
- batched_labels = cls_inds.unsqueeze(0).repeat(batch, 1)
- batched_labels = batched_labels.where((batch_inds == batch_template.unsqueeze(1)),batched_labels.new_ones(1) * -1)
-
- N = batched_dets.shape[0]
-
- batched_dets = torch.cat((batched_dets, batched_dets.new_zeros((N, 1, 5))), 1)
- batched_labels = torch.cat((batched_labels, -batched_labels.new_ones((N, 1))), 1)
-
- _, topk_inds = batched_dets[:, :, -1].sort(dim=1, descending=True)
-
- topk_batch_inds = torch.arange(batch, dtype=topk_inds.dtype, device=topk_inds.device).view(-1, 1)
- batched_dets = batched_dets[topk_batch_inds, topk_inds, ...]
- det_classes = batched_labels[topk_batch_inds, topk_inds, ...]
- det_boxes, det_scores = batched_dets.split((4, 1), -1)
- det_scores = det_scores.squeeze(-1)
- num_det = (det_scores > 0).sum(1, keepdim=True)
- return num_det, det_boxes, det_scores, det_classes
-
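The `convert_matrix` used above (and again in `ONNX_TRT7` below) is a linear map from center-format boxes to corner format. A worked example:

```python
import torch

convert_matrix = torch.tensor([[1, 0, 1, 0],
                               [0, 1, 0, 1],
                               [-0.5, 0, 0.5, 0],
                               [0, -0.5, 0, 0.5]], dtype=torch.float32)
box_xywh = torch.tensor([[10., 20., 4., 6.]])   # cx, cy, w, h
print(box_xywh @ convert_matrix)                # tensor([[ 8., 17., 12., 23.]]) = x1, y1, x2, y2
```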
-class ONNX_TRT7(nn.Module):
- '''onnx module with TensorRT NMS operation.'''
- def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, device=None):
- super().__init__()
- self.device = device if device else torch.device('cpu')
- self.shareLocation = 1
- self.backgroundLabelId = -1
- self.numClasses = 80
- self.topK = 1000
- self.keepTopK = max_obj
- self.scoreThreshold = score_thres
- self.iouThreshold = iou_thres
- self.isNormalized = 0
- self.clipBoxes = 0
- self.scoreBits = 16
- self.caffeSemantics = 1
- self.plugin_version = '1'
- self.convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=self.device)
- def forward(self, x):
- box = x[:, :, :4]
- conf = x[:, :, 4:5]
- score = x[:, :, 5:]
- score *= conf
- box @= self.convert_matrix
- box = box.unsqueeze(2)
- self.numClasses = int(score.shape[2])
- num_det, det_boxes, det_scores, det_classes = TRT7_NMS.apply(box, score, self.plugin_version,
- self.shareLocation,
- self.backgroundLabelId,
- self.numClasses,
- self.topK,
- self.keepTopK,
- self.scoreThreshold,
- self.iouThreshold,
- self.isNormalized,
- self.clipBoxes,
- self.scoreBits,
- self.caffeSemantics,
- )
- return num_det, det_boxes, det_scores, det_classes.int()
-
-
-class ONNX_TRT8(nn.Module):
- '''onnx module with TensorRT NMS operation.'''
- def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, device=None):
- super().__init__()
- self.device = device if device else torch.device('cpu')
- self.background_class = -1
- self.box_coding = 1
- self.iou_threshold = iou_thres
- self.max_obj = max_obj
- self.plugin_version = '1'
- self.score_activation = 0
- self.score_threshold = score_thres
-
- def forward(self, x):
- box = x[:, :, :4]
- conf = x[:, :, 4:5]
- score = x[:, :, 5:]
- score *= conf
- num_det, det_boxes, det_scores, det_classes = TRT8_NMS.apply(box, score, self.background_class, self.box_coding,
- self.iou_threshold, self.max_obj,
- self.plugin_version, self.score_activation,
- self.score_threshold)
- return num_det, det_boxes, det_scores, det_classes
-
-
-class End2End(nn.Module):
- '''export onnx or tensorrt model with NMS operation.'''
- def __init__(self, model, max_obj=100, iou_thres=0.45, score_thres=0.25, device=None, ort=False, trt_version=8, with_preprocess=False):
- super().__init__()
- device = device if device else torch.device('cpu')
- self.with_preprocess = with_preprocess
- self.model = model.to(device)
- TRT = ONNX_TRT8 if trt_version >= 8 else ONNX_TRT7
- self.patch_model = ONNX_ORT if ort else TRT
- self.end2end = self.patch_model(max_obj, iou_thres, score_thres, device)
- self.end2end.eval()
-
- def forward(self, x):
- if self.with_preprocess:
- x = x[:,[2,1,0],...]
- x = x * (1/255)
- x = self.model(x)
- if isinstance(x, list):
- x = x[0]
- x = self.end2end(x)
- return x
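For reference, a minimal export sketch for this deleted wrapper, assuming the module path from this diff; `model` stands for any detector whose raw output is `(batch, anchors, 5 + nc)`, and its construction is out of scope here.

```python
import torch
from yolov6.models.end2end import End2End  # module deleted in this diff

# `model` is assumed to be an already-built detector in eval mode
wrapped = End2End(model, max_obj=100, iou_thres=0.45, score_thres=0.25,
                  ort=True)  # ONNX-Runtime NMS variant; trt_version=8 for TensorRT
torch.onnx.export(wrapped, torch.randn(1, 3, 640, 640), "yolov6_e2e.onnx",
                  opset_version=12)
```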
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_distill_ns.py b/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_distill_ns.py
deleted file mode 100644
index 912bd6c6ad46896ba2adb95045d9859c2581c9d1..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_distill_ns.py
+++ /dev/null
@@ -1,270 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import math
-from yolov6.layers.common import *
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox
-
-
-class Detect(nn.Module):
- export = False
- '''Efficient Decoupled Head for cost-free distillation (for the nano/small models).
- '''
- def __init__(self, num_classes=80, num_layers=3, inplace=True, head_layers=None, use_dfl=True, reg_max=16): # detection layer
- super().__init__()
- assert head_layers is not None
- self.nc = num_classes # number of classes
- self.no = num_classes + 5 # number of outputs per anchor
- self.nl = num_layers # number of detection layers
- self.grid = [torch.zeros(1)] * num_layers
- self.prior_prob = 1e-2
- self.inplace = inplace
- stride = [8, 16, 32] # strides computed during build
- self.stride = torch.tensor(stride)
- self.use_dfl = use_dfl
- self.reg_max = reg_max
- self.proj_conv = nn.Conv2d(self.reg_max + 1, 1, 1, bias=False)
- self.grid_cell_offset = 0.5
- self.grid_cell_size = 5.0
-
- # Init decouple head
- self.stems = nn.ModuleList()
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- self.cls_preds = nn.ModuleList()
- self.reg_preds_dist = nn.ModuleList()
- self.reg_preds = nn.ModuleList()
-
- # Efficient decoupled head layers
- for i in range(num_layers):
- idx = i*6
- self.stems.append(head_layers[idx])
- self.cls_convs.append(head_layers[idx+1])
- self.reg_convs.append(head_layers[idx+2])
- self.cls_preds.append(head_layers[idx+3])
- self.reg_preds_dist.append(head_layers[idx+4])
- self.reg_preds.append(head_layers[idx+5])
-
- def initialize_biases(self):
-
- for conv in self.cls_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(-math.log((1 - self.prior_prob) / self.prior_prob))
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- for conv in self.reg_preds_dist:
- b = conv.bias.view(-1, )
- b.data.fill_(1.0)
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- for conv in self.reg_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(1.0)
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- self.proj = nn.Parameter(torch.linspace(0, self.reg_max, self.reg_max + 1), requires_grad=False)
- self.proj_conv.weight = nn.Parameter(self.proj.view([1, self.reg_max + 1, 1, 1]).clone().detach(),
- requires_grad=False)
-
- def forward(self, x):
- if self.training:
- cls_score_list = []
- reg_distri_list = []
- reg_lrtb_list = []
-
- for i in range(self.nl):
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
- cls_feat = self.cls_convs[i](cls_x)
- cls_output = self.cls_preds[i](cls_feat)
- reg_feat = self.reg_convs[i](reg_x)
- reg_output = self.reg_preds_dist[i](reg_feat)
- reg_output_lrtb = self.reg_preds[i](reg_feat)
-
- cls_output = torch.sigmoid(cls_output)
- cls_score_list.append(cls_output.flatten(2).permute((0, 2, 1)))
- reg_distri_list.append(reg_output.flatten(2).permute((0, 2, 1)))
- reg_lrtb_list.append(reg_output_lrtb.flatten(2).permute((0, 2, 1)))
-
- cls_score_list = torch.cat(cls_score_list, axis=1)
- reg_distri_list = torch.cat(reg_distri_list, axis=1)
- reg_lrtb_list = torch.cat(reg_lrtb_list, axis=1)
-
- return x, cls_score_list, reg_distri_list, reg_lrtb_list
- else:
- cls_score_list = []
- reg_lrtb_list = []
-
- for i in range(self.nl):
- b, _, h, w = x[i].shape
- l = h * w
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
- cls_feat = self.cls_convs[i](cls_x)
- cls_output = self.cls_preds[i](cls_feat)
- reg_feat = self.reg_convs[i](reg_x)
- reg_output_lrtb = self.reg_preds[i](reg_feat)
-
- cls_output = torch.sigmoid(cls_output)
-
- if self.export:
- cls_score_list.append(cls_output)
- reg_lrtb_list.append(reg_output_lrtb)
- else:
- cls_score_list.append(cls_output.reshape([b, self.nc, l]))
- reg_lrtb_list.append(reg_output_lrtb.reshape([b, 4, l]))
-
- if self.export:
- return tuple(torch.cat([cls, reg], 1) for cls, reg in zip(cls_score_list, reg_lrtb_list))
-
- cls_score_list = torch.cat(cls_score_list, axis=-1).permute(0, 2, 1)
- reg_lrtb_list = torch.cat(reg_lrtb_list, axis=-1).permute(0, 2, 1)
-
-
- anchor_points, stride_tensor = generate_anchors(
- x, self.stride, self.grid_cell_size, self.grid_cell_offset, device=x[0].device, is_eval=True, mode='af')
-
- pred_bboxes = dist2bbox(reg_lrtb_list, anchor_points, box_format='xywh')
- pred_bboxes *= stride_tensor
- return torch.cat(
- [
- pred_bboxes,
- torch.ones((b, pred_bboxes.shape[1], 1), device=pred_bboxes.device, dtype=pred_bboxes.dtype),
- cls_score_list
- ],
- axis=-1)
-
-
-def build_effidehead_layer(channels_list, num_anchors, num_classes, reg_max=16):
- head_layers = nn.Sequential(
- # stem0
- ConvBNSiLU(
- in_channels=channels_list[6],
- out_channels=channels_list[6],
- kernel_size=1,
- stride=1
- ),
- # cls_conv0
- ConvBNSiLU(
- in_channels=channels_list[6],
- out_channels=channels_list[6],
- kernel_size=3,
- stride=1
- ),
- # reg_conv0
- ConvBNSiLU(
- in_channels=channels_list[6],
- out_channels=channels_list[6],
- kernel_size=3,
- stride=1
- ),
- # cls_pred0
- nn.Conv2d(
- in_channels=channels_list[6],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred0
- nn.Conv2d(
- in_channels=channels_list[6],
- out_channels=4 * (reg_max + num_anchors),
- kernel_size=1
- ),
- # reg_pred0_1
- nn.Conv2d(
- in_channels=channels_list[6],
- out_channels=4 * (num_anchors),
- kernel_size=1
- ),
- # stem1
- ConvBNSiLU(
- in_channels=channels_list[8],
- out_channels=channels_list[8],
- kernel_size=1,
- stride=1
- ),
- # cls_conv1
- ConvBNSiLU(
- in_channels=channels_list[8],
- out_channels=channels_list[8],
- kernel_size=3,
- stride=1
- ),
- # reg_conv1
- ConvBNSiLU(
- in_channels=channels_list[8],
- out_channels=channels_list[8],
- kernel_size=3,
- stride=1
- ),
- # cls_pred1
- nn.Conv2d(
- in_channels=channels_list[8],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred1
- nn.Conv2d(
- in_channels=channels_list[8],
- out_channels=4 * (reg_max + num_anchors),
- kernel_size=1
- ),
- # reg_pred1_1
- nn.Conv2d(
- in_channels=channels_list[8],
- out_channels=4 * (num_anchors),
- kernel_size=1
- ),
- # stem2
- ConvBNSiLU(
- in_channels=channels_list[10],
- out_channels=channels_list[10],
- kernel_size=1,
- stride=1
- ),
- # cls_conv2
- ConvBNSiLU(
- in_channels=channels_list[10],
- out_channels=channels_list[10],
- kernel_size=3,
- stride=1
- ),
- # reg_conv2
- ConvBNSiLU(
- in_channels=channels_list[10],
- out_channels=channels_list[10],
- kernel_size=3,
- stride=1
- ),
- # cls_pred2
- nn.Conv2d(
- in_channels=channels_list[10],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred2
- nn.Conv2d(
- in_channels=channels_list[10],
- out_channels=4 * (reg_max + num_anchors),
- kernel_size=1
- ),
- # reg_pred2_1
- nn.Conv2d(
- in_channels=channels_list[10],
- out_channels=4 * (num_anchors),
- kernel_size=1
- )
- )
- return head_layers
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_fuseab.py b/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_fuseab.py
deleted file mode 100644
index 718ae3168a492571d830b394d30814ec7d987cd3..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_fuseab.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import math
-from yolov6.layers.common import *
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox
-
-
-class Detect(nn.Module):
- export = False
- '''Efficient Decoupled Head for fusing anchor-based branches.
- '''
- def __init__(self, num_classes=80, anchors=None, num_layers=3, inplace=True, head_layers=None, use_dfl=True, reg_max=16): # detection layer
- super().__init__()
- assert head_layers is not None
- self.nc = num_classes # number of classes
- self.no = num_classes + 5 # number of outputs per anchor
- self.nl = num_layers # number of detection layers
- if isinstance(anchors, (list, tuple)):
- self.na = len(anchors[0]) // 2
- else:
- self.na = anchors
- self.grid = [torch.zeros(1)] * num_layers
- self.prior_prob = 1e-2
- self.inplace = inplace
- stride = [8, 16, 32] if num_layers == 3 else [8, 16, 32, 64] # strides computed during build
- self.stride = torch.tensor(stride)
- self.use_dfl = use_dfl
- self.reg_max = reg_max
- self.proj_conv = nn.Conv2d(self.reg_max + 1, 1, 1, bias=False)
- self.grid_cell_offset = 0.5
- self.grid_cell_size = 5.0
- self.anchors_init = (torch.tensor(anchors) / self.stride[:, None]).reshape(self.nl, self.na, 2)
-
- # Init decouple head
- self.stems = nn.ModuleList()
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- self.cls_preds = nn.ModuleList()
- self.reg_preds = nn.ModuleList()
- self.cls_preds_ab = nn.ModuleList()
- self.reg_preds_ab = nn.ModuleList()
-
- # Efficient decoupled head layers
- for i in range(num_layers):
- idx = i*7
- self.stems.append(head_layers[idx])
- self.cls_convs.append(head_layers[idx+1])
- self.reg_convs.append(head_layers[idx+2])
- self.cls_preds.append(head_layers[idx+3])
- self.reg_preds.append(head_layers[idx+4])
- self.cls_preds_ab.append(head_layers[idx+5])
- self.reg_preds_ab.append(head_layers[idx+6])
-
- def initialize_biases(self):
-
- for conv in self.cls_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(-math.log((1 - self.prior_prob) / self.prior_prob))
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- for conv in self.cls_preds_ab:
- b = conv.bias.view(-1, )
- b.data.fill_(-math.log((1 - self.prior_prob) / self.prior_prob))
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- for conv in self.reg_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(1.0)
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- for conv in self.reg_preds_ab:
- b = conv.bias.view(-1, )
- b.data.fill_(1.0)
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- self.proj = nn.Parameter(torch.linspace(0, self.reg_max, self.reg_max + 1), requires_grad=False)
- self.proj_conv.weight = nn.Parameter(self.proj.view([1, self.reg_max + 1, 1, 1]).clone().detach(),
- requires_grad=False)
-
- def forward(self, x):
- if self.training:
- device = x[0].device
- cls_score_list_af = []
- reg_dist_list_af = []
- cls_score_list_ab = []
- reg_dist_list_ab = []
-
- for i in range(self.nl):
- b, _, h, w = x[i].shape
- l = h * w
-
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
-
- cls_feat = self.cls_convs[i](cls_x)
- reg_feat = self.reg_convs[i](reg_x)
-
- #anchor_base
- cls_output_ab = self.cls_preds_ab[i](cls_feat)
- reg_output_ab = self.reg_preds_ab[i](reg_feat)
-
- cls_output_ab = torch.sigmoid(cls_output_ab)
- cls_output_ab = cls_output_ab.reshape(b, self.na, -1, h, w).permute(0,1,3,4,2)
- cls_score_list_ab.append(cls_output_ab.flatten(1,3))
-
- reg_output_ab = reg_output_ab.reshape(b, self.na, -1, h, w).permute(0,1,3,4,2)
- reg_output_ab[..., 2:4] = ((reg_output_ab[..., 2:4].sigmoid() * 2) ** 2 ) * (self.anchors_init[i].reshape(1, self.na, 1, 1, 2).to(device))
- reg_dist_list_ab.append(reg_output_ab.flatten(1,3))
-
- #anchor_free
- cls_output_af = self.cls_preds[i](cls_feat)
- reg_output_af = self.reg_preds[i](reg_feat)
-
- cls_output_af = torch.sigmoid(cls_output_af)
- cls_score_list_af.append(cls_output_af.flatten(2).permute((0, 2, 1)))
- reg_dist_list_af.append(reg_output_af.flatten(2).permute((0, 2, 1)))
-
-
- cls_score_list_ab = torch.cat(cls_score_list_ab, axis=1)
- reg_dist_list_ab = torch.cat(reg_dist_list_ab, axis=1)
- cls_score_list_af = torch.cat(cls_score_list_af, axis=1)
- reg_dist_list_af = torch.cat(reg_dist_list_af, axis=1)
-
- return x, cls_score_list_ab, reg_dist_list_ab, cls_score_list_af, reg_dist_list_af
-
- else:
- device = x[0].device
- cls_score_list_af = []
- reg_dist_list_af = []
-
- for i in range(self.nl):
- b, _, h, w = x[i].shape
- l = h * w
-
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
-
- cls_feat = self.cls_convs[i](cls_x)
- reg_feat = self.reg_convs[i](reg_x)
-
- #anchor_free
- cls_output_af = self.cls_preds[i](cls_feat)
- reg_output_af = self.reg_preds[i](reg_feat)
-
- if self.use_dfl:
- reg_output_af = reg_output_af.reshape([-1, 4, self.reg_max + 1, l]).permute(0, 2, 1, 3)
- reg_output_af = self.proj_conv(F.softmax(reg_output_af, dim=1))
-
- cls_output_af = torch.sigmoid(cls_output_af)
-
- if self.export:
- cls_score_list_af.append(cls_output_af)
- reg_dist_list_af.append(reg_output_af)
- else:
- cls_score_list_af.append(cls_output_af.reshape([b, self.nc, l]))
- reg_dist_list_af.append(reg_output_af.reshape([b, 4, l]))
-
- if self.export:
- return tuple(torch.cat([cls, reg], 1) for cls, reg in zip(cls_score_list_af, reg_dist_list_af))
-
- cls_score_list_af = torch.cat(cls_score_list_af, axis=-1).permute(0, 2, 1)
- reg_dist_list_af = torch.cat(reg_dist_list_af, axis=-1).permute(0, 2, 1)
-
-
- #anchor_free
- anchor_points_af, stride_tensor_af = generate_anchors(
- x, self.stride, self.grid_cell_size, self.grid_cell_offset, device=x[0].device, is_eval=True, mode='af')
-
- pred_bboxes_af = dist2bbox(reg_dist_list_af, anchor_points_af, box_format='xywh')
- pred_bboxes_af *= stride_tensor_af
-
- pred_bboxes = pred_bboxes_af
- cls_score_list = cls_score_list_af
-
- return torch.cat(
- [
- pred_bboxes,
- torch.ones((b, pred_bboxes.shape[1], 1), device=pred_bboxes.device, dtype=pred_bboxes.dtype),
- cls_score_list
- ],
- axis=-1)
-
-
-def build_effidehead_layer(channels_list, num_anchors, num_classes, reg_max=16, num_layers=3):
-
- chx = [6, 8, 10] if num_layers == 3 else [8, 9, 10, 11]
-
- head_layers = nn.Sequential(
- # stem0
- ConvBNSiLU(
- in_channels=channels_list[chx[0]],
- out_channels=channels_list[chx[0]],
- kernel_size=1,
- stride=1
- ),
- # cls_conv0
- ConvBNSiLU(
- in_channels=channels_list[chx[0]],
- out_channels=channels_list[chx[0]],
- kernel_size=3,
- stride=1
- ),
- # reg_conv0
- ConvBNSiLU(
- in_channels=channels_list[chx[0]],
- out_channels=channels_list[chx[0]],
- kernel_size=3,
- stride=1
- ),
- # cls_pred0_af
- nn.Conv2d(
- in_channels=channels_list[chx[0]],
- out_channels=num_classes,
- kernel_size=1
- ),
- # reg_pred0_af
- nn.Conv2d(
- in_channels=channels_list[chx[0]],
- out_channels=4 * (reg_max + 1),
- kernel_size=1
- ),
- # cls_pred0_3ab
- nn.Conv2d(
- in_channels=channels_list[chx[0]],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred0_3ab
- nn.Conv2d(
- in_channels=channels_list[chx[0]],
- out_channels=4 * num_anchors,
- kernel_size=1
- ),
- # stem1
- ConvBNSiLU(
- in_channels=channels_list[chx[1]],
- out_channels=channels_list[chx[1]],
- kernel_size=1,
- stride=1
- ),
- # cls_conv1
- ConvBNSiLU(
- in_channels=channels_list[chx[1]],
- out_channels=channels_list[chx[1]],
- kernel_size=3,
- stride=1
- ),
- # reg_conv1
- ConvBNSiLU(
- in_channels=channels_list[chx[1]],
- out_channels=channels_list[chx[1]],
- kernel_size=3,
- stride=1
- ),
- # cls_pred1_af
- nn.Conv2d(
- in_channels=channels_list[chx[1]],
- out_channels=num_classes,
- kernel_size=1
- ),
- # reg_pred1_af
- nn.Conv2d(
- in_channels=channels_list[chx[1]],
- out_channels=4 * (reg_max + 1),
- kernel_size=1
- ),
- # cls_pred1_3ab
- nn.Conv2d(
- in_channels=channels_list[chx[1]],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred1_3ab
- nn.Conv2d(
- in_channels=channels_list[chx[1]],
- out_channels=4 * num_anchors,
- kernel_size=1
- ),
- # stem2
- ConvBNSiLU(
- in_channels=channels_list[chx[2]],
- out_channels=channels_list[chx[2]],
- kernel_size=1,
- stride=1
- ),
- # cls_conv2
- ConvBNSiLU(
- in_channels=channels_list[chx[2]],
- out_channels=channels_list[chx[2]],
- kernel_size=3,
- stride=1
- ),
- # reg_conv2
- ConvBNSiLU(
- in_channels=channels_list[chx[2]],
- out_channels=channels_list[chx[2]],
- kernel_size=3,
- stride=1
- ),
- # cls_pred2_af
- nn.Conv2d(
- in_channels=channels_list[chx[2]],
- out_channels=num_classes,
- kernel_size=1
- ),
- # reg_pred2_af
- nn.Conv2d(
- in_channels=channels_list[chx[2]],
- out_channels=4 * (reg_max + 1),
- kernel_size=1
- ),
- # cls_pred2_3ab
- nn.Conv2d(
- in_channels=channels_list[chx[2]],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred2_3ab
- nn.Conv2d(
- in_channels=channels_list[chx[2]],
- out_channels=4 * num_anchors,
- kernel_size=1
- ),
- )
-
- return head_layers
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_lite.py b/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_lite.py
deleted file mode 100644
index dc6f634026a50c0615ba79f6a34c975632080606..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/heads/effidehead_lite.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import math
-from yolov6.layers.common import DPBlock
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox
-
-
-class Detect(nn.Module):
- export = False
- '''Efficient Decoupled Head
- With hardware-aware design, the decoupled head is optimized with
- hybrid-channel methods.
- '''
- def __init__(self, num_classes=80, num_layers=3, inplace=True, head_layers=None): # detection layer
- super().__init__()
- assert head_layers is not None
- self.nc = num_classes # number of classes
- self.no = num_classes + 5 # number of outputs per anchor
- self.nl = num_layers # number of detection layers
- self.grid = [torch.zeros(1)] * num_layers
- self.prior_prob = 1e-2
- self.inplace = inplace
- stride = [8, 16, 32] if num_layers == 3 else [8, 16, 32, 64] # strides computed during build
- self.stride = torch.tensor(stride)
- self.grid_cell_offset = 0.5
- self.grid_cell_size = 5.0
-
- # Init decouple head
- self.stems = nn.ModuleList()
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- self.cls_preds = nn.ModuleList()
- self.reg_preds = nn.ModuleList()
-
- # Efficient decoupled head layers
- for i in range(num_layers):
- idx = i*5
- self.stems.append(head_layers[idx])
- self.cls_convs.append(head_layers[idx+1])
- self.reg_convs.append(head_layers[idx+2])
- self.cls_preds.append(head_layers[idx+3])
- self.reg_preds.append(head_layers[idx+4])
-
- def initialize_biases(self):
-
- for conv in self.cls_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(-math.log((1 - self.prior_prob) / self.prior_prob))
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- for conv in self.reg_preds:
- b = conv.bias.view(-1, )
- b.data.fill_(1.0)
- conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- w = conv.weight
- w.data.fill_(0.)
- conv.weight = torch.nn.Parameter(w, requires_grad=True)
-
- def forward(self, x):
- if self.training:
- cls_score_list = []
- reg_distri_list = []
-
- for i in range(self.nl):
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
- cls_feat = self.cls_convs[i](cls_x)
- cls_output = self.cls_preds[i](cls_feat)
- reg_feat = self.reg_convs[i](reg_x)
- reg_output = self.reg_preds[i](reg_feat)
-
- cls_output = torch.sigmoid(cls_output)
- cls_score_list.append(cls_output.flatten(2).permute((0, 2, 1)))
- reg_distri_list.append(reg_output.flatten(2).permute((0, 2, 1)))
-
- cls_score_list = torch.cat(cls_score_list, axis=1)
- reg_distri_list = torch.cat(reg_distri_list, axis=1)
-
- return x, cls_score_list, reg_distri_list
- else:
- cls_score_list = []
- reg_dist_list = []
-
- for i in range(self.nl):
- b, _, h, w = x[i].shape
- l = h * w
- x[i] = self.stems[i](x[i])
- cls_x = x[i]
- reg_x = x[i]
- cls_feat = self.cls_convs[i](cls_x)
- cls_output = self.cls_preds[i](cls_feat)
- reg_feat = self.reg_convs[i](reg_x)
- reg_output = self.reg_preds[i](reg_feat)
-
- cls_output = torch.sigmoid(cls_output)
-
- if self.export:
- cls_score_list.append(cls_output)
- reg_dist_list.append(reg_output)
- else:
- cls_score_list.append(cls_output.reshape([b, self.nc, l]))
- reg_dist_list.append(reg_output.reshape([b, 4, l]))
-
-
- if self.export:
- return tuple(torch.cat([cls, reg], 1) for cls, reg in zip(cls_score_list, reg_dist_list))
-
- cls_score_list = torch.cat(cls_score_list, axis=-1).permute(0, 2, 1)
- reg_dist_list = torch.cat(reg_dist_list, axis=-1).permute(0, 2, 1)
-
-
- anchor_points, stride_tensor = generate_anchors(
- x, self.stride, self.grid_cell_size, self.grid_cell_offset, device=x[0].device, is_eval=True, mode='af')
-
- pred_bboxes = dist2bbox(reg_dist_list, anchor_points, box_format='xywh')
- pred_bboxes *= stride_tensor
- return torch.cat(
- [
- pred_bboxes,
- torch.ones((b, pred_bboxes.shape[1], 1), device=pred_bboxes.device, dtype=pred_bboxes.dtype),
- cls_score_list
- ],
- axis=-1)
-
-def build_effidehead_layer(channels_list, num_anchors, num_classes, num_layers):
-
- head_layers = nn.Sequential(
- # stem0
- DPBlock(
- in_channel=channels_list[0],
- out_channel=channels_list[0],
- kernel_size=5,
- stride=1
- ),
- # cls_conv0
- DPBlock(
- in_channel=channels_list[0],
- out_channel=channels_list[0],
- kernel_size=5,
- stride=1
- ),
- # reg_conv0
- DPBlock(
- in_channel=channels_list[0],
- out_channel=channels_list[0],
- kernel_size=5,
- stride=1
- ),
- # cls_pred0
- nn.Conv2d(
- in_channels=channels_list[0],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred0
- nn.Conv2d(
- in_channels=channels_list[0],
- out_channels=4 * num_anchors,
- kernel_size=1
- ),
- # stem1
- DPBlock(
- in_channel=channels_list[1],
- out_channel=channels_list[1],
- kernel_size=5,
- stride=1
- ),
- # cls_conv1
- DPBlock(
- in_channel=channels_list[1],
- out_channel=channels_list[1],
- kernel_size=5,
- stride=1
- ),
- # reg_conv1
- DPBlock(
- in_channel=channels_list[1],
- out_channel=channels_list[1],
- kernel_size=5,
- stride=1
- ),
- # cls_pred1
- nn.Conv2d(
- in_channels=channels_list[1],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred1
- nn.Conv2d(
- in_channels=channels_list[1],
- out_channels=4 * num_anchors,
- kernel_size=1
- ),
- # stem2
- DPBlock(
- in_channel=channels_list[2],
- out_channel=channels_list[2],
- kernel_size=5,
- stride=1
- ),
- # cls_conv2
- DPBlock(
- in_channel=channels_list[2],
- out_channel=channels_list[2],
- kernel_size=5,
- stride=1
- ),
- # reg_conv2
- DPBlock(
- in_channel=channels_list[2],
- out_channel=channels_list[2],
- kernel_size=5,
- stride=1
- ),
- # cls_pred2
- nn.Conv2d(
- in_channels=channels_list[2],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- ),
- # reg_pred2
- nn.Conv2d(
- in_channels=channels_list[2],
- out_channels=4 * num_anchors,
- kernel_size=1
- )
- )
-
- if num_layers == 4:
- head_layers.add_module('stem3',
- # stem3
- DPBlock(
- in_channel=channels_list[3],
- out_channel=channels_list[3],
- kernel_size=5,
- stride=1
- )
- )
- head_layers.add_module('cls_conv3',
- # cls_conv3
- DPBlock(
- in_channel=channels_list[3],
- out_channel=channels_list[3],
- kernel_size=5,
- stride=1
- )
- )
- head_layers.add_module('reg_conv3',
- # reg_conv3
- DPBlock(
- in_channel=channels_list[3],
- out_channel=channels_list[3],
- kernel_size=5,
- stride=1
- )
- )
- head_layers.add_module('cls_pred3',
- # cls_pred3
- nn.Conv2d(
- in_channels=channels_list[3],
- out_channels=num_classes * num_anchors,
- kernel_size=1
- )
- )
- head_layers.add_module('reg_pred3',
- # reg_pred3
- nn.Conv2d(
- in_channels=channels_list[3],
- out_channels=4 * num_anchors,
- kernel_size=1
- )
- )
-
- return head_layers
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss.py b/cv/detection/yolov6/pytorch/yolov6/models/losses/loss.py
deleted file mode 100644
index 0b1d39c4b382f0a10f63905c94d327d9b4a911b4..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss.py
+++ /dev/null
@@ -1,273 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) 2023, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-
-import torch
-import torch.nn as nn
-import numpy as np
-import torch.nn.functional as F
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox, bbox2dist, xywh2xyxy, box_iou
-from yolov6.utils.figure_iou import IOUloss
-from yolov6.assigners.atss_assigner import ATSSAssigner
-from yolov6.assigners.tal_assigner import TaskAlignedAssigner
-
-class ComputeLoss:
- '''Loss computation func.'''
- def __init__(self,
- fpn_strides=[8, 16, 32],
- grid_cell_size=5.0,
- grid_cell_offset=0.5,
- num_classes=80,
- ori_img_size=640,
- warmup_epoch=4,
- use_dfl=True,
- reg_max=16,
- iou_type='giou',
- loss_weight={
- 'class': 1.0,
- 'iou': 2.5,
- 'dfl': 0.5},
- ):
-
- self.fpn_strides = fpn_strides
- self.grid_cell_size = grid_cell_size
- self.grid_cell_offset = grid_cell_offset
- self.num_classes = num_classes
- self.ori_img_size = ori_img_size
-
- self.warmup_epoch = warmup_epoch
- self.warmup_assigner = ATSSAssigner(9, num_classes=self.num_classes)
- self.formal_assigner = TaskAlignedAssigner(topk=13, num_classes=self.num_classes, alpha=1.0, beta=6.0)
-
- self.use_dfl = use_dfl
- self.reg_max = reg_max
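-        # Integer bin centers [0, 1, ..., reg_max] for Distribution Focal Loss (DFL):
-        # bbox_decode below takes an expectation over these bins to recover
-        # continuous box-side distances.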
- self.proj = nn.Parameter(torch.linspace(0, self.reg_max, self.reg_max + 1), requires_grad=False)
- self.iou_type = iou_type
- self.varifocal_loss = VarifocalLoss().cuda()
- self.bbox_loss = BboxLoss(self.num_classes, self.reg_max, self.use_dfl, self.iou_type).cuda()
- self.loss_weight = loss_weight
-
- def __call__(
- self,
- outputs,
- targets,
- epoch_num,
- step_num,
- batch_height,
- batch_width
- ):
-
- feats, pred_scores, pred_distri = outputs
- anchors, anchor_points, n_anchors_list, stride_tensor = \
- generate_anchors(feats, self.fpn_strides, self.grid_cell_size, self.grid_cell_offset, device=feats[0].device)
-
- assert pred_scores.type() == pred_distri.type()
- gt_bboxes_scale = torch.tensor([batch_width, batch_height, batch_width, batch_height]).type_as(pred_scores)
- batch_size = pred_scores.shape[0]
-
- # targets
-        targets = self.preprocess(targets, batch_size, gt_bboxes_scale)
- gt_labels = targets[:, :, :1]
- gt_bboxes = targets[:, :, 1:] #xyxy
- mask_gt = (gt_bboxes.sum(-1, keepdim=True) > 0).float()
-
- # pboxes
- anchor_points_s = anchor_points / stride_tensor
- pred_bboxes = self.bbox_decode(anchor_points_s, pred_distri) #xyxy
-
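-        # Warmup epochs use the ATSS assigner; afterwards the task-aligned assigner
-        # (TAL) matches anchors to targets using the detached predictions.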
- try:
- if epoch_num < self.warmup_epoch:
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.warmup_assigner(
- anchors,
- n_anchors_list,
- gt_labels,
- gt_bboxes,
- mask_gt,
- pred_bboxes.detach() * stride_tensor)
- else:
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- pred_scores.detach(),
- pred_bboxes.detach() * stride_tensor,
- anchor_points,
- gt_labels,
- gt_bboxes,
- mask_gt)
-
- except RuntimeError:
-            print(
-                "OOM RuntimeError is raised due to the huge memory cost during label assignment. "
-                "CPU mode is applied in this batch. If you want to avoid this issue, "
-                "try to reduce the batch size or image size."
-            )
- torch.cuda.empty_cache()
- print("------------CPU Mode for This Batch-------------")
- if epoch_num < self.warmup_epoch:
- _anchors = anchors.cpu().float()
- _n_anchors_list = n_anchors_list
- _gt_labels = gt_labels.cpu().float()
- _gt_bboxes = gt_bboxes.cpu().float()
- _mask_gt = mask_gt.cpu().float()
- _pred_bboxes = pred_bboxes.detach().cpu().float()
- _stride_tensor = stride_tensor.cpu().float()
-
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.warmup_assigner(
- _anchors,
- _n_anchors_list,
- _gt_labels,
- _gt_bboxes,
- _mask_gt,
- _pred_bboxes * _stride_tensor)
-
- else:
- _pred_scores = pred_scores.detach().cpu().float()
- _pred_bboxes = pred_bboxes.detach().cpu().float()
- _anchor_points = anchor_points.cpu().float()
- _gt_labels = gt_labels.cpu().float()
- _gt_bboxes = gt_bboxes.cpu().float()
- _mask_gt = mask_gt.cpu().float()
- _stride_tensor = stride_tensor.cpu().float()
-
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- _pred_scores,
- _pred_bboxes * _stride_tensor,
- _anchor_points,
- _gt_labels,
- _gt_bboxes,
- _mask_gt)
-
- target_labels = target_labels.cuda()
- target_bboxes = target_bboxes.cuda()
- target_scores = target_scores.cuda()
- fg_mask = fg_mask.cuda()
-        # Periodically free cached GPU memory
- if step_num % 10 == 0:
- torch.cuda.empty_cache()
-
- # rescale bbox
- target_bboxes /= stride_tensor
-
- # cls loss
- target_labels = torch.where(fg_mask > 0, target_labels, torch.full_like(target_labels, self.num_classes))
- one_hot_label = F.one_hot(target_labels.long(), self.num_classes + 1)[..., :-1]
- loss_cls = self.varifocal_loss(pred_scores, target_scores, one_hot_label)
-
- target_scores_sum = target_scores.sum()
-        # Avoid division by zero, which would make the loss inf or nan; the > 1 guard
-        # also keeps very small score sums from inflating the loss. If
-        # target_scores_sum is 0, loss_cls stays 0 as well.
- if target_scores_sum > 1:
- loss_cls /= target_scores_sum
-
- # bbox loss
- loss_iou, loss_dfl = self.bbox_loss(pred_distri, pred_bboxes, anchor_points_s, target_bboxes,
- target_scores, target_scores_sum, fg_mask)
-
- loss = self.loss_weight['class'] * loss_cls + \
- self.loss_weight['iou'] * loss_iou + \
- self.loss_weight['dfl'] * loss_dfl
-
- return loss, \
- torch.cat(((self.loss_weight['iou'] * loss_iou).unsqueeze(0),
- (self.loss_weight['dfl'] * loss_dfl).unsqueeze(0),
- (self.loss_weight['class'] * loss_cls).unsqueeze(0))).detach()
-
- def preprocess(self, targets, batch_size, scale_tensor):
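-        # Bucket the flat (image_idx, class, cx, cy, w, h) rows per image, pad every
-        # image to the same box count with dummy class -1 rows, then scale to pixels
-        # and convert xywh -> xyxy.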
- targets_list = np.zeros((batch_size, 1, 5)).tolist()
- for i, item in enumerate(targets.cpu().numpy().tolist()):
- targets_list[int(item[0])].append(item[1:])
- max_len = max((len(l) for l in targets_list))
-        targets = torch.from_numpy(
-            np.array(list(map(lambda l: l + [[-1, 0, 0, 0, 0]] * (max_len - len(l)), targets_list)),
-                     dtype=np.float32)[:, 1:, :]).to(targets.device)
- batch_target = targets[:, :, 1:5].mul_(scale_tensor)
- targets[..., 1:] = xywh2xyxy(batch_target)
- return targets
-
- def bbox_decode(self, anchor_points, pred_dist):
- if self.use_dfl:
- batch_size, n_anchors, _ = pred_dist.shape
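-            # Softmax over the reg_max + 1 bins of each box side, then project onto
-            # self.proj to take the expected (continuous) l/t/r/b distance.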
- pred_dist = F.softmax(pred_dist.view(batch_size, n_anchors, 4, self.reg_max + 1), dim=-1).matmul(self.proj.to(pred_dist.device))
- return dist2bbox(pred_dist, anchor_points)
-
-
-class VarifocalLoss(nn.Module):
- def __init__(self):
- super(VarifocalLoss, self).__init__()
-
-    def forward(self, pred_score, gt_score, label, alpha=0.75, gamma=2.0):
-
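-        # Varifocal weighting: negatives are down-weighted by the focal term
-        # alpha * p^gamma; positives are weighted by their IoU-aware target score.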
- weight = alpha * pred_score.pow(gamma) * (1 - label) + gt_score * label
- with torch.cuda.amp.autocast(enabled=False):
- loss = (F.binary_cross_entropy(pred_score.float(), gt_score.float(), reduction='none') * weight).sum()
-
- return loss
-
-
-class BboxLoss(nn.Module):
- def __init__(self, num_classes, reg_max, use_dfl=False, iou_type='giou'):
- super(BboxLoss, self).__init__()
- self.num_classes = num_classes
- self.iou_loss = IOUloss(box_format='xyxy', iou_type=iou_type, eps=1e-10)
- self.reg_max = reg_max
- self.use_dfl = use_dfl
-
- def forward(self, pred_dist, pred_bboxes, anchor_points,
- target_bboxes, target_scores, target_scores_sum, fg_mask):
-
- # select positive samples mask
- num_pos = fg_mask.sum()
- if num_pos > 0:
- # iou loss
- bbox_mask = fg_mask.unsqueeze(-1).repeat([1, 1, 4])
- pred_bboxes_pos = torch.masked_select(pred_bboxes,
- bbox_mask).reshape([-1, 4])
- target_bboxes_pos = torch.masked_select(
- target_bboxes, bbox_mask).reshape([-1, 4])
- bbox_weight = torch.masked_select(
- target_scores.sum(-1), fg_mask).unsqueeze(-1)
- loss_iou = self.iou_loss(pred_bboxes_pos,
- target_bboxes_pos) * bbox_weight
- if target_scores_sum > 1:
- loss_iou = loss_iou.sum() / target_scores_sum
- else:
- loss_iou = loss_iou.sum()
-
- # dfl loss
- if self.use_dfl:
- dist_mask = fg_mask.unsqueeze(-1).repeat(
- [1, 1, (self.reg_max + 1) * 4])
- pred_dist_pos = torch.masked_select(
- pred_dist, dist_mask).reshape([-1, 4, self.reg_max + 1])
- target_ltrb = bbox2dist(anchor_points, target_bboxes, self.reg_max)
- target_ltrb_pos = torch.masked_select(
- target_ltrb, bbox_mask).reshape([-1, 4])
- loss_dfl = self._df_loss(pred_dist_pos,
- target_ltrb_pos) * bbox_weight
- if target_scores_sum > 1:
- loss_dfl = loss_dfl.sum() / target_scores_sum
- else:
- loss_dfl = loss_dfl.sum()
- else:
- loss_dfl = pred_dist.sum() * 0.
-
- else:
- loss_iou = pred_dist.sum() * 0.
- loss_dfl = pred_dist.sum() * 0.
-
- return loss_iou, loss_dfl
-
- def _df_loss(self, pred_dist, target):
- target_left = target.to(torch.long)
- target_right = target_left + 1
- weight_left = target_right.to(torch.float) - target
- weight_right = 1 - weight_left
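-        # DFL: cross-entropy against the two integer bins that bracket the continuous
-        # target distance, linearly weighted by proximity to each bin.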
- loss_left = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_left.view(-1), reduction='none').view(
- target_left.shape) * weight_left
- loss_right = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_right.view(-1), reduction='none').view(
- target_left.shape) * weight_right
- return (loss_left + loss_right).mean(-1, keepdim=True)
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_distill.py b/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_distill.py
deleted file mode 100644
index afc46ef2e3734ba09a1ea72248b1e20846cb4e7b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_distill.py
+++ /dev/null
@@ -1,362 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-
-import torch
-import torch.nn as nn
-import numpy as np
-import torch.nn.functional as F
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox, bbox2dist, xywh2xyxy
-from yolov6.utils.figure_iou import IOUloss
-from yolov6.assigners.atss_assigner import ATSSAssigner
-from yolov6.assigners.tal_assigner import TaskAlignedAssigner
-
-
-class ComputeLoss:
- '''Loss computation func.'''
- def __init__(self,
- fpn_strides=[8, 16, 32],
- grid_cell_size=5.0,
- grid_cell_offset=0.5,
- num_classes=80,
- ori_img_size=640,
- warmup_epoch=0,
- use_dfl=True,
- reg_max=16,
- iou_type='giou',
- loss_weight={
- 'class': 1.0,
- 'iou': 2.5,
- 'dfl': 0.5,
- 'cwd': 10.0},
- distill_feat = False,
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- }
- ):
-
- self.fpn_strides = fpn_strides
- self.grid_cell_size = grid_cell_size
- self.grid_cell_offset = grid_cell_offset
- self.num_classes = num_classes
- self.ori_img_size = ori_img_size
-
- self.warmup_epoch = warmup_epoch
- self.warmup_assigner = ATSSAssigner(9, num_classes=self.num_classes)
- self.formal_assigner = TaskAlignedAssigner(topk=13, num_classes=self.num_classes, alpha=1.0, beta=6.0)
-
- self.use_dfl = use_dfl
- self.reg_max = reg_max
- self.proj = nn.Parameter(torch.linspace(0, self.reg_max, self.reg_max + 1), requires_grad=False)
- self.iou_type = iou_type
- self.varifocal_loss = VarifocalLoss().cuda()
- self.bbox_loss = BboxLoss(self.num_classes, self.reg_max, self.use_dfl, self.iou_type).cuda()
- self.loss_weight = loss_weight
- self.distill_feat = distill_feat
- self.distill_weight = distill_weight
-
- def __call__(
- self,
- outputs,
- t_outputs,
- s_featmaps,
- t_featmaps,
- targets,
- epoch_num,
- max_epoch,
- temperature,
- step_num,
- batch_height,
- batch_width
- ):
-
- feats, pred_scores, pred_distri = outputs
- t_feats, t_pred_scores, t_pred_distri = t_outputs[0], t_outputs[-2], t_outputs[-1]
- anchors, anchor_points, n_anchors_list, stride_tensor = \
- generate_anchors(feats, self.fpn_strides, self.grid_cell_size, self.grid_cell_offset, device=feats[0].device)
- t_anchors, t_anchor_points, t_n_anchors_list, t_stride_tensor = \
- generate_anchors(t_feats, self.fpn_strides, self.grid_cell_size, self.grid_cell_offset, device=feats[0].device)
-
- assert pred_scores.type() == pred_distri.type()
- gt_bboxes_scale = torch.tensor([batch_width, batch_height, batch_width, batch_height]).type_as(pred_scores)
- batch_size = pred_scores.shape[0]
-
- # targets
-        targets = self.preprocess(targets, batch_size, gt_bboxes_scale)
- gt_labels = targets[:, :, :1]
- gt_bboxes = targets[:, :, 1:] #xyxy
- mask_gt = (gt_bboxes.sum(-1, keepdim=True) > 0).float()
-
- # pboxes
- anchor_points_s = anchor_points / stride_tensor
- pred_bboxes = self.bbox_decode(anchor_points_s, pred_distri) #xyxy
- t_anchor_points_s = t_anchor_points / t_stride_tensor
- t_pred_bboxes = self.bbox_decode(t_anchor_points_s, t_pred_distri) #xyxy
-
- try:
- if epoch_num < self.warmup_epoch:
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.warmup_assigner(
- anchors,
- n_anchors_list,
- gt_labels,
- gt_bboxes,
- mask_gt,
- pred_bboxes.detach() * stride_tensor)
- else:
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- pred_scores.detach(),
- pred_bboxes.detach() * stride_tensor,
- anchor_points,
- gt_labels,
- gt_bboxes,
- mask_gt)
-
- except RuntimeError:
-            print(
-                "OOM RuntimeError is raised due to the huge memory cost during label assignment. "
-                "CPU mode is applied in this batch. If you want to avoid this issue, "
-                "try to reduce the batch size or image size."
-            )
- torch.cuda.empty_cache()
- print("------------CPU Mode for This Batch-------------")
- if epoch_num < self.warmup_epoch:
- _anchors = anchors.cpu().float()
- _n_anchors_list = n_anchors_list
- _gt_labels = gt_labels.cpu().float()
- _gt_bboxes = gt_bboxes.cpu().float()
- _mask_gt = mask_gt.cpu().float()
- _pred_bboxes = pred_bboxes.detach().cpu().float()
- _stride_tensor = stride_tensor.cpu().float()
-
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.warmup_assigner(
- _anchors,
- _n_anchors_list,
- _gt_labels,
- _gt_bboxes,
- _mask_gt,
- _pred_bboxes * _stride_tensor)
-
- else:
- _pred_scores = pred_scores.detach().cpu().float()
- _pred_bboxes = pred_bboxes.detach().cpu().float()
- _anchor_points = anchor_points.cpu().float()
- _gt_labels = gt_labels.cpu().float()
- _gt_bboxes = gt_bboxes.cpu().float()
- _mask_gt = mask_gt.cpu().float()
- _stride_tensor = stride_tensor.cpu().float()
-
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- _pred_scores,
- _pred_bboxes * _stride_tensor,
- _anchor_points,
- _gt_labels,
- _gt_bboxes,
- _mask_gt)
-
- target_labels = target_labels.cuda()
- target_bboxes = target_bboxes.cuda()
- target_scores = target_scores.cuda()
- fg_mask = fg_mask.cuda()
-
-        # Periodically free cached GPU memory
- if step_num % 10 == 0:
- torch.cuda.empty_cache()
-
- # rescale bbox
- target_bboxes /= stride_tensor
-
- # cls loss
- target_labels = torch.where(fg_mask > 0, target_labels, torch.full_like(target_labels, self.num_classes))
- one_hot_label = F.one_hot(target_labels.long(), self.num_classes + 1)[..., :-1]
- loss_cls = self.varifocal_loss(pred_scores, target_scores, one_hot_label)
-
- target_scores_sum = target_scores.sum()
-        # Avoid division by zero, which would make the loss inf or nan.
- if target_scores_sum > 0:
- loss_cls /= target_scores_sum
-
- # bbox loss
- loss_iou, loss_dfl, d_loss_dfl = self.bbox_loss(pred_distri, pred_bboxes, t_pred_distri, t_pred_bboxes, temperature, anchor_points_s,
- target_bboxes, target_scores, target_scores_sum, fg_mask)
-
- logits_student = pred_scores
- logits_teacher = t_pred_scores
- distill_num_classes = self.num_classes
- d_loss_cls = self.distill_loss_cls(logits_student, logits_teacher, distill_num_classes, temperature)
- if self.distill_feat:
- d_loss_cw = self.distill_loss_cw(s_featmaps, t_featmaps)
- else:
- d_loss_cw = torch.tensor(0.).to(feats[0].device)
- import math
-        distill_weightdecay = ((1 - math.cos(epoch_num * math.pi / max_epoch)) / 2) * (0.01 - 1) + 1
- d_loss_dfl *= distill_weightdecay
- d_loss_cls *= distill_weightdecay
- d_loss_cw *= distill_weightdecay
- loss_cls_all = loss_cls + d_loss_cls * self.distill_weight['class']
- loss_dfl_all = loss_dfl + d_loss_dfl * self.distill_weight['dfl']
- loss = self.loss_weight['class'] * loss_cls_all + \
- self.loss_weight['iou'] * loss_iou + \
- self.loss_weight['dfl'] * loss_dfl_all + \
- self.loss_weight['cwd'] * d_loss_cw
-
- return loss, \
- torch.cat(((self.loss_weight['iou'] * loss_iou).unsqueeze(0),
- (self.loss_weight['dfl'] * loss_dfl_all).unsqueeze(0),
- (self.loss_weight['class'] * loss_cls_all).unsqueeze(0),
- (self.loss_weight['cwd'] * d_loss_cw).unsqueeze(0))).detach()
-
- def distill_loss_cls(self, logits_student, logits_teacher, num_classes, temperature=20):
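-        # Classification distillation: KL divergence between temperature-softened
-        # teacher and student class scores, scaled by T^2 (standard KD scaling).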
- logits_student = logits_student.view(-1, num_classes)
- logits_teacher = logits_teacher.view(-1, num_classes)
- pred_student = F.softmax(logits_student / temperature, dim=1)
- pred_teacher = F.softmax(logits_teacher / temperature, dim=1)
- log_pred_student = torch.log(pred_student)
-
- d_loss_cls = F.kl_div(log_pred_student, pred_teacher, reduction="sum")
- d_loss_cls *= temperature**2
- return d_loss_cls
-
-    def distill_loss_cw(self, s_feats, t_feats, temperature=1):
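-        # Channel-wise feature distillation: the spatial distribution of each channel
-        # of the three neck feature maps is matched with KL divergence (log_target=True
-        # because both inputs are log-probabilities), normalized by N * C.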
- N,C,H,W = s_feats[0].shape
- # print(N,C,H,W)
- loss_cw = F.kl_div(F.log_softmax(s_feats[0].view(N,C,H*W)/temperature, dim=2),
- F.log_softmax(t_feats[0].view(N,C,H*W).detach()/temperature, dim=2),
- reduction='sum',
- log_target=True) * (temperature * temperature)/ (N*C)
-
- N,C,H,W = s_feats[1].shape
- # print(N,C,H,W)
- loss_cw += F.kl_div(F.log_softmax(s_feats[1].view(N,C,H*W)/temperature, dim=2),
- F.log_softmax(t_feats[1].view(N,C,H*W).detach()/temperature, dim=2),
- reduction='sum',
- log_target=True) * (temperature * temperature)/ (N*C)
-
- N,C,H,W = s_feats[2].shape
- # print(N,C,H,W)
- loss_cw += F.kl_div(F.log_softmax(s_feats[2].view(N,C,H*W)/temperature, dim=2),
- F.log_softmax(t_feats[2].view(N,C,H*W).detach()/temperature, dim=2),
- reduction='sum',
- log_target=True) * (temperature * temperature)/ (N*C)
- # print(loss_cw)
- return loss_cw
-
- def preprocess(self, targets, batch_size, scale_tensor):
- targets_list = np.zeros((batch_size, 1, 5)).tolist()
- for i, item in enumerate(targets.cpu().numpy().tolist()):
- targets_list[int(item[0])].append(item[1:])
- max_len = max((len(l) for l in targets_list))
-        targets = torch.from_numpy(
-            np.array(list(map(lambda l: l + [[-1, 0, 0, 0, 0]] * (max_len - len(l)),
-                              targets_list)))[:, 1:, :]).to(targets.device)
- batch_target = targets[:, :, 1:5].mul_(scale_tensor)
- targets[..., 1:] = xywh2xyxy(batch_target)
- return targets
-
- def bbox_decode(self, anchor_points, pred_dist):
- if self.use_dfl:
- batch_size, n_anchors, _ = pred_dist.shape
- pred_dist = F.softmax(pred_dist.view(batch_size, n_anchors, 4, self.reg_max + 1), dim=-1).matmul(self.proj.to(pred_dist.device))
- return dist2bbox(pred_dist, anchor_points)
-
-
-class VarifocalLoss(nn.Module):
- def __init__(self):
- super(VarifocalLoss, self).__init__()
-
-    def forward(self, pred_score, gt_score, label, alpha=0.75, gamma=2.0):
-
- weight = alpha * pred_score.pow(gamma) * (1 - label) + gt_score * label
- with torch.cuda.amp.autocast(enabled=False):
- loss = (F.binary_cross_entropy(pred_score.float(), gt_score.float(), reduction='none') * weight).sum()
-
- return loss
-
-
-class BboxLoss(nn.Module):
- def __init__(self, num_classes, reg_max, use_dfl=False, iou_type='giou'):
- super(BboxLoss, self).__init__()
- self.num_classes = num_classes
- self.iou_loss = IOUloss(box_format='xyxy', iou_type=iou_type, eps=1e-10)
- self.reg_max = reg_max
- self.use_dfl = use_dfl
-
- def forward(self, pred_dist, pred_bboxes, t_pred_dist, t_pred_bboxes, temperature, anchor_points,
- target_bboxes, target_scores, target_scores_sum, fg_mask):
- # select positive samples mask
- num_pos = fg_mask.sum()
- if num_pos > 0:
- # iou loss
- bbox_mask = fg_mask.unsqueeze(-1).repeat([1, 1, 4])
- pred_bboxes_pos = torch.masked_select(pred_bboxes,
- bbox_mask).reshape([-1, 4])
- t_pred_bboxes_pos = torch.masked_select(t_pred_bboxes,
- bbox_mask).reshape([-1, 4])
- target_bboxes_pos = torch.masked_select(
- target_bboxes, bbox_mask).reshape([-1, 4])
- bbox_weight = torch.masked_select(
- target_scores.sum(-1), fg_mask).unsqueeze(-1)
- loss_iou = self.iou_loss(pred_bboxes_pos,
- target_bboxes_pos) * bbox_weight
- if target_scores_sum == 0:
- loss_iou = loss_iou.sum()
- else:
- loss_iou = loss_iou.sum() / target_scores_sum
-
- # dfl loss
- if self.use_dfl:
- dist_mask = fg_mask.unsqueeze(-1).repeat(
- [1, 1, (self.reg_max + 1) * 4])
- pred_dist_pos = torch.masked_select(
- pred_dist, dist_mask).reshape([-1, 4, self.reg_max + 1])
- t_pred_dist_pos = torch.masked_select(
- t_pred_dist, dist_mask).reshape([-1, 4, self.reg_max + 1])
- target_ltrb = bbox2dist(anchor_points, target_bboxes, self.reg_max)
- target_ltrb_pos = torch.masked_select(
- target_ltrb, bbox_mask).reshape([-1, 4])
- loss_dfl = self._df_loss(pred_dist_pos,
- target_ltrb_pos) * bbox_weight
- d_loss_dfl = self.distill_loss_dfl(pred_dist_pos, t_pred_dist_pos, temperature) * bbox_weight
- if target_scores_sum == 0:
- loss_dfl = loss_dfl.sum()
- d_loss_dfl = d_loss_dfl.sum()
- else:
- loss_dfl = loss_dfl.sum() / target_scores_sum
- d_loss_dfl = d_loss_dfl.sum() / target_scores_sum
- else:
- loss_dfl = pred_dist.sum() * 0.
- d_loss_dfl = pred_dist.sum() * 0.
-
- else:
-
- loss_iou = pred_dist.sum() * 0.
- loss_dfl = pred_dist.sum() * 0.
- d_loss_dfl = pred_dist.sum() * 0.
-
- return loss_iou, loss_dfl, d_loss_dfl
-
- def _df_loss(self, pred_dist, target):
- target_left = target.to(torch.long)
- target_right = target_left + 1
- weight_left = target_right.to(torch.float) - target
- weight_right = 1 - weight_left
- loss_left = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_left.view(-1), reduction='none').view(
- target_left.shape) * weight_left
- loss_right = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_right.view(-1), reduction='none').view(
- target_left.shape) * weight_right
- return (loss_left + loss_right).mean(-1, keepdim=True)
-
- def distill_loss_dfl(self, logits_student, logits_teacher, temperature=20):
-
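-        # 17 = reg_max + 1 for the default reg_max of 16; student and teacher DFL bin
-        # distributions are matched with temperature-scaled KL divergence.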
-        logits_student = logits_student.view(-1, 17)
-        logits_teacher = logits_teacher.view(-1, 17)
- pred_student = F.softmax(logits_student / temperature, dim=1)
- pred_teacher = F.softmax(logits_teacher / temperature, dim=1)
- log_pred_student = torch.log(pred_student)
-
- d_loss_dfl = F.kl_div(log_pred_student, pred_teacher, reduction="none").sum(1).mean()
- d_loss_dfl *= temperature**2
- return d_loss_dfl
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_distill_ns.py b/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_distill_ns.py
deleted file mode 100644
index 9a5ba9b7860e7c5bfb877fb7542b4988d2148ae0..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_distill_ns.py
+++ /dev/null
@@ -1,350 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-
-import torch
-import torch.nn as nn
-import numpy as np
-import torch.nn.functional as F
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox, bbox2dist, xywh2xyxy
-from yolov6.utils.figure_iou import IOUloss
-from yolov6.assigners.atss_assigner import ATSSAssigner
-from yolov6.assigners.tal_assigner import TaskAlignedAssigner
-
-
-class ComputeLoss:
- '''Loss computation func.'''
- def __init__(self,
- fpn_strides=[8, 16, 32],
- grid_cell_size=5.0,
- grid_cell_offset=0.5,
- num_classes=80,
- ori_img_size=640,
- warmup_epoch=0,
- use_dfl=True,
- reg_max=16,
- iou_type='giou',
- loss_weight={
- 'class': 1.0,
- 'iou': 2.5,
- 'dfl': 0.5,
- 'cwd': 10.0},
- distill_feat = False,
- distill_weight={
- 'class': 1.0,
- 'dfl': 1.0,
- }
- ):
-
- self.fpn_strides = fpn_strides
- self.grid_cell_size = grid_cell_size
- self.grid_cell_offset = grid_cell_offset
- self.num_classes = num_classes
- self.ori_img_size = ori_img_size
-
- self.warmup_epoch = warmup_epoch
- self.formal_assigner = TaskAlignedAssigner(topk=13, num_classes=self.num_classes, alpha=1.0, beta=6.0)
-
- self.use_dfl = use_dfl
- self.reg_max = reg_max
- self.proj = nn.Parameter(torch.linspace(0, self.reg_max, self.reg_max + 1), requires_grad=False)
- self.iou_type = iou_type
- self.varifocal_loss = VarifocalLoss().cuda()
- self.bbox_loss = BboxLoss(self.num_classes, self.reg_max, self.use_dfl, self.iou_type).cuda()
- self.loss_weight = loss_weight
- self.distill_feat = distill_feat
- self.distill_weight = distill_weight
-
- def __call__(
- self,
- outputs,
- t_outputs,
- s_featmaps,
- t_featmaps,
- targets,
- epoch_num,
- max_epoch,
- temperature,
- step_num,
- batch_height,
- batch_width
- ):
-
- feats, pred_scores, pred_distri, pred_lrtb = outputs
- t_feats, t_pred_scores, t_pred_distri = t_outputs[0], t_outputs[-2], t_outputs[-1]
- anchors, anchor_points, n_anchors_list, stride_tensor = \
- generate_anchors(feats, self.fpn_strides, self.grid_cell_size, self.grid_cell_offset, device=feats[0].device)
- t_anchors, t_anchor_points, t_n_anchors_list, t_stride_tensor = \
- generate_anchors(t_feats, self.fpn_strides, self.grid_cell_size, self.grid_cell_offset, device=feats[0].device)
-
- assert pred_scores.type() == pred_distri.type()
- gt_bboxes_scale = torch.tensor([batch_width, batch_height, batch_width, batch_height]).type_as(pred_scores)
- batch_size = pred_scores.shape[0]
-
- # targets
-        targets = self.preprocess(targets, batch_size, gt_bboxes_scale)
- gt_labels = targets[:, :, :1]
- gt_bboxes = targets[:, :, 1:] #xyxy
- mask_gt = (gt_bboxes.sum(-1, keepdim=True) > 0).float()
-
- # pboxes
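-        # This variant keeps two regression branches: the DFL distribution branch
-        # (pred_distri, distilled from the teacher) and a plain l/t/r/b branch
-        # (pred_lrtb); both decoded box sets are supervised in the bbox loss.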
- anchor_points_s = anchor_points / stride_tensor
- pred_bboxes = self.bbox_decode(anchor_points_s, pred_distri) #xyxy #distri branch
- pred_bboxes_lrtb = dist2bbox(pred_lrtb, anchor_points_s) #iou branch
- t_anchor_points_s = t_anchor_points / t_stride_tensor
- t_pred_bboxes = self.bbox_decode(t_anchor_points_s, t_pred_distri) #xyxy
- try:
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- pred_scores.detach(),
- pred_bboxes.detach() * stride_tensor,
- anchor_points,
- gt_labels,
- gt_bboxes,
- mask_gt)
-
- except RuntimeError:
-            print(
-                "OOM RuntimeError is raised due to the huge memory cost during label assignment. "
-                "CPU mode is applied in this batch. If you want to avoid this issue, "
-                "try to reduce the batch size or image size."
-            )
- torch.cuda.empty_cache()
- print("------------CPU Mode for This Batch-------------")
- _pred_scores = pred_scores.detach().cpu().float()
- _pred_bboxes = pred_bboxes.detach().cpu().float()
- _anchor_points = anchor_points.cpu().float()
- _gt_labels = gt_labels.cpu().float()
- _gt_bboxes = gt_bboxes.cpu().float()
- _mask_gt = mask_gt.cpu().float()
- _stride_tensor = stride_tensor.cpu().float()
-
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- _pred_scores,
- _pred_bboxes * _stride_tensor,
- _anchor_points,
- _gt_labels,
- _gt_bboxes,
- _mask_gt)
-
- target_labels = target_labels.cuda()
- target_bboxes = target_bboxes.cuda()
- target_scores = target_scores.cuda()
- fg_mask = fg_mask.cuda()
-
-        # Periodically free cached GPU memory
- if step_num % 10 == 0:
- torch.cuda.empty_cache()
-
- # rescale bbox
- target_bboxes /= stride_tensor
-
- # cls loss
- target_labels = torch.where(fg_mask > 0, target_labels, torch.full_like(target_labels, self.num_classes))
- one_hot_label = F.one_hot(target_labels.long(), self.num_classes + 1)[..., :-1]
- loss_cls = self.varifocal_loss(pred_scores, target_scores, one_hot_label)
-
- target_scores_sum = target_scores.sum()
-        # Avoid division by zero, which would make the loss inf or nan.
- if target_scores_sum > 0:
- loss_cls /= target_scores_sum
-
- # bbox loss
- loss_iou, loss_dfl, d_loss_dfl = self.bbox_loss(pred_distri,
- pred_bboxes_lrtb,
- pred_bboxes,
- t_pred_distri,
- t_pred_bboxes,
- temperature,
- anchor_points_s,
- target_bboxes,
- target_scores,
- target_scores_sum,
- fg_mask)
-
- logits_student = pred_scores
- logits_teacher = t_pred_scores
- distill_num_classes = self.num_classes
- d_loss_cls = self.distill_loss_cls(logits_student, logits_teacher, distill_num_classes, temperature)
- if self.distill_feat:
- d_loss_cw = self.distill_loss_cw(s_featmaps, t_featmaps)
- else:
- d_loss_cw = torch.tensor(0.).to(feats[0].device)
- import math
-        distill_weightdecay = ((1 - math.cos(epoch_num * math.pi / max_epoch)) / 2) * (0.01 - 1) + 1
- d_loss_dfl *= distill_weightdecay
- d_loss_cls *= distill_weightdecay
- d_loss_cw *= distill_weightdecay
- loss_cls_all = loss_cls + d_loss_cls * self.distill_weight['class']
- loss_dfl_all = loss_dfl + d_loss_dfl * self.distill_weight['dfl']
- loss = self.loss_weight['class'] * loss_cls_all + \
- self.loss_weight['iou'] * loss_iou + \
- self.loss_weight['dfl'] * loss_dfl_all + \
- self.loss_weight['cwd'] * d_loss_cw
-
- return loss, \
- torch.cat(((self.loss_weight['iou'] * loss_iou).unsqueeze(0),
- (self.loss_weight['dfl'] * loss_dfl_all).unsqueeze(0),
- (self.loss_weight['class'] * loss_cls_all).unsqueeze(0),
- (self.loss_weight['cwd'] * d_loss_cw).unsqueeze(0))).detach()
-
- def distill_loss_cls(self, logits_student, logits_teacher, num_classes, temperature=20):
- logits_student = logits_student.view(-1, num_classes)
- logits_teacher = logits_teacher.view(-1, num_classes)
- pred_student = F.softmax(logits_student / temperature, dim=1)
- pred_teacher = F.softmax(logits_teacher / temperature, dim=1)
- log_pred_student = torch.log(pred_student)
-
- d_loss_cls = F.kl_div(log_pred_student, pred_teacher, reduction="sum")
- d_loss_cls *= temperature**2
- return d_loss_cls
-
- def distill_loss_cw(self, s_feats, t_feats, temperature=1):
- N,C,H,W = s_feats[0].shape
- # print(N,C,H,W)
- loss_cw = F.kl_div(F.log_softmax(s_feats[0].view(N,C,H*W)/temperature, dim=2),
- F.log_softmax(t_feats[0].view(N,C,H*W).detach()/temperature, dim=2),
- reduction='sum',
- log_target=True) * (temperature * temperature)/ (N*C)
-
- N,C,H,W = s_feats[1].shape
- # print(N,C,H,W)
- loss_cw += F.kl_div(F.log_softmax(s_feats[1].view(N,C,H*W)/temperature, dim=2),
- F.log_softmax(t_feats[1].view(N,C,H*W).detach()/temperature, dim=2),
- reduction='sum',
- log_target=True) * (temperature * temperature)/ (N*C)
-
- N,C,H,W = s_feats[2].shape
- # print(N,C,H,W)
- loss_cw += F.kl_div(F.log_softmax(s_feats[2].view(N,C,H*W)/temperature, dim=2),
- F.log_softmax(t_feats[2].view(N,C,H*W).detach()/temperature, dim=2),
- reduction='sum',
- log_target=True) * (temperature * temperature)/ (N*C)
- # print(loss_cw)
- return loss_cw
-
- def preprocess(self, targets, batch_size, scale_tensor):
- targets_list = np.zeros((batch_size, 1, 5)).tolist()
- for i, item in enumerate(targets.cpu().numpy().tolist()):
- targets_list[int(item[0])].append(item[1:])
- max_len = max((len(l) for l in targets_list))
-        targets = torch.from_numpy(
-            np.array(list(map(lambda l: l + [[-1, 0, 0, 0, 0]] * (max_len - len(l)),
-                              targets_list)))[:, 1:, :]).to(targets.device)
- batch_target = targets[:, :, 1:5].mul_(scale_tensor)
- targets[..., 1:] = xywh2xyxy(batch_target)
- return targets
-
- def bbox_decode(self, anchor_points, pred_dist):
- if self.use_dfl:
- batch_size, n_anchors, _ = pred_dist.shape
- pred_dist = F.softmax(pred_dist.view(batch_size, n_anchors, 4, self.reg_max + 1), dim=-1).matmul(self.proj.to(pred_dist.device))
- return dist2bbox(pred_dist, anchor_points)
-
-
-class VarifocalLoss(nn.Module):
- def __init__(self):
- super(VarifocalLoss, self).__init__()
-
-    def forward(self, pred_score, gt_score, label, alpha=0.75, gamma=2.0):
-
- weight = alpha * pred_score.pow(gamma) * (1 - label) + gt_score * label
- with torch.cuda.amp.autocast(enabled=False):
- loss = (F.binary_cross_entropy(pred_score.float(), gt_score.float(), reduction='none') * weight).sum()
-
- return loss
-
-
-class BboxLoss(nn.Module):
- def __init__(self, num_classes, reg_max, use_dfl=False, iou_type='giou'):
- super(BboxLoss, self).__init__()
- self.num_classes = num_classes
- self.iou_loss = IOUloss(box_format='xyxy', iou_type=iou_type, eps=1e-10)
- self.reg_max = reg_max
- self.use_dfl = use_dfl
-
- def forward(self, pred_dist, pred_bboxes_lrtb, pred_bboxes, t_pred_dist, t_pred_bboxes, temperature, anchor_points,
- target_bboxes, target_scores, target_scores_sum, fg_mask):
- # select positive samples mask
- num_pos = fg_mask.sum()
- if num_pos > 0:
- # iou loss
- bbox_mask = fg_mask.unsqueeze(-1).repeat([1, 1, 4])
- pred_bboxes_pos = torch.masked_select(pred_bboxes,
- bbox_mask).reshape([-1, 4])
- pred_bboxes_lrtb_pos = torch.masked_select(pred_bboxes_lrtb,
- bbox_mask).reshape([-1, 4])
- t_pred_bboxes_pos = torch.masked_select(t_pred_bboxes,
- bbox_mask).reshape([-1, 4])
- target_bboxes_pos = torch.masked_select(
- target_bboxes, bbox_mask).reshape([-1, 4])
- bbox_weight = torch.masked_select(
- target_scores.sum(-1), fg_mask).unsqueeze(-1)
- loss_iou = self.iou_loss(pred_bboxes_pos,
- target_bboxes_pos) * bbox_weight
- loss_iou_lrtb = self.iou_loss(pred_bboxes_lrtb_pos,
- target_bboxes_pos) * bbox_weight
-
- if target_scores_sum == 0:
- loss_iou = loss_iou.sum()
- loss_iou_lrtb = loss_iou_lrtb.sum()
- else:
- loss_iou = loss_iou.sum() / target_scores_sum
- loss_iou_lrtb = loss_iou_lrtb.sum() / target_scores_sum
-
- # dfl loss
- if self.use_dfl:
- dist_mask = fg_mask.unsqueeze(-1).repeat(
- [1, 1, (self.reg_max + 1) * 4])
- pred_dist_pos = torch.masked_select(
- pred_dist, dist_mask).reshape([-1, 4, self.reg_max + 1])
- t_pred_dist_pos = torch.masked_select(
- t_pred_dist, dist_mask).reshape([-1, 4, self.reg_max + 1])
- target_ltrb = bbox2dist(anchor_points, target_bboxes, self.reg_max)
- target_ltrb_pos = torch.masked_select(
- target_ltrb, bbox_mask).reshape([-1, 4])
- loss_dfl = self._df_loss(pred_dist_pos,
- target_ltrb_pos) * bbox_weight
- d_loss_dfl = self.distill_loss_dfl(pred_dist_pos, t_pred_dist_pos, temperature) * bbox_weight
- if target_scores_sum == 0:
- loss_dfl = loss_dfl.sum()
- d_loss_dfl = d_loss_dfl.sum()
- else:
- loss_dfl = loss_dfl.sum() / target_scores_sum
- d_loss_dfl = d_loss_dfl.sum() / target_scores_sum
- else:
- loss_dfl = pred_dist.sum() * 0.
- d_loss_dfl = pred_dist.sum() * 0.
-
- else:
-
- loss_iou = pred_dist.sum() * 0.
- loss_dfl = pred_dist.sum() * 0.
- d_loss_dfl = pred_dist.sum() * 0.
- loss_iou_lrtb = pred_dist.sum() * 0.
-
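-        # Both IoU terms, from the distribution branch and the lrtb branch, are
-        # summed into the final box loss.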
- return (loss_iou + loss_iou_lrtb), loss_dfl, d_loss_dfl
-
- def _df_loss(self, pred_dist, target):
- target_left = target.to(torch.long)
- target_right = target_left + 1
- weight_left = target_right.to(torch.float) - target
- weight_right = 1 - weight_left
- loss_left = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_left.view(-1), reduction='none').view(
- target_left.shape) * weight_left
- loss_right = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_right.view(-1), reduction='none').view(
- target_left.shape) * weight_right
- return (loss_left + loss_right).mean(-1, keepdim=True)
-
- def distill_loss_dfl(self, logits_student, logits_teacher, temperature=20):
-
-        logits_student = logits_student.view(-1, 17)
-        logits_teacher = logits_teacher.view(-1, 17)
- pred_student = F.softmax(logits_student / temperature, dim=1)
- pred_teacher = F.softmax(logits_teacher / temperature, dim=1)
- log_pred_student = torch.log(pred_student)
-
- d_loss_dfl = F.kl_div(log_pred_student, pred_teacher, reduction="none").sum(1).mean()
- d_loss_dfl *= temperature**2
- return d_loss_dfl
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_fuseab.py b/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_fuseab.py
deleted file mode 100644
index 4ae91f376a7bbaa37f31c59d0634ea73e056f78b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/losses/loss_fuseab.py
+++ /dev/null
@@ -1,243 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-
-import torch
-import torch.nn as nn
-import numpy as np
-import torch.nn.functional as F
-from yolov6.assigners.anchor_generator import generate_anchors
-from yolov6.utils.general import dist2bbox, bbox2dist, xywh2xyxy, box_iou
-from yolov6.utils.figure_iou import IOUloss
-from yolov6.assigners.tal_assigner import TaskAlignedAssigner
-
-
-class ComputeLoss:
- '''Loss computation func.'''
- def __init__(self,
- fpn_strides=[8, 16, 32],
- grid_cell_size=5.0,
- grid_cell_offset=0.5,
- num_classes=80,
- ori_img_size=640,
- warmup_epoch=0,
- use_dfl=True,
- reg_max=16,
- iou_type='giou',
- loss_weight={
- 'class': 1.0,
- 'iou': 2.5,
- 'dfl': 0.5},
- ):
-
- self.fpn_strides = fpn_strides
- self.grid_cell_size = grid_cell_size
- self.grid_cell_offset = grid_cell_offset
- self.num_classes = num_classes
- self.ori_img_size = ori_img_size
-
- self.warmup_epoch = warmup_epoch
- self.formal_assigner = TaskAlignedAssigner(topk=26, num_classes=self.num_classes, alpha=1.0, beta=6.0)
-
- self.use_dfl = use_dfl
- self.reg_max = reg_max
- self.proj = nn.Parameter(torch.linspace(0, self.reg_max, self.reg_max + 1), requires_grad=False)
- self.iou_type = iou_type
- self.varifocal_loss = VarifocalLoss().cuda()
- self.bbox_loss = BboxLoss(self.num_classes, self.reg_max, self.use_dfl, self.iou_type).cuda()
- self.loss_weight = loss_weight
-
- def __call__(
- self,
- outputs,
- targets,
- epoch_num,
- step_num,
- batch_height,
- batch_width
- ):
-
- feats, pred_scores, pred_distri = outputs
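-        # mode='ab' builds anchor-based (rather than anchor-free) anchors for the
-        # anchor-based branch that this loss trains.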
- anchors, anchor_points, n_anchors_list, stride_tensor = \
- generate_anchors(feats, self.fpn_strides, self.grid_cell_size, self.grid_cell_offset, device=feats[0].device, is_eval=False, mode='ab')
-
- assert pred_scores.type() == pred_distri.type()
- gt_bboxes_scale = torch.tensor([batch_width, batch_height, batch_width, batch_height]).type_as(pred_scores)
- batch_size = pred_scores.shape[0]
-
- # targets
-        targets = self.preprocess(targets, batch_size, gt_bboxes_scale)
- gt_labels = targets[:, :, :1]
- gt_bboxes = targets[:, :, 1:] #xyxy
- mask_gt = (gt_bboxes.sum(-1, keepdim=True) > 0).float()
-
- # pboxes
- anchor_points_s = anchor_points / stride_tensor
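-        # Anchor-based decode: pred_distri holds xywh offsets here, so predicted
-        # centers are shifted onto the anchor points and converted to xyxy directly
-        # (no DFL expectation).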
- pred_distri[..., :2] += anchor_points_s
- pred_bboxes = xywh2xyxy(pred_distri)
-
- try:
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- pred_scores.detach(),
- pred_bboxes.detach() * stride_tensor,
- anchor_points,
- gt_labels,
- gt_bboxes,
- mask_gt)
-
- except RuntimeError:
-            print(
-                "OOM RuntimeError is raised due to the huge memory cost during label assignment. "
-                "CPU mode is applied in this batch. If you want to avoid this issue, "
-                "try to reduce the batch size or image size."
-            )
- torch.cuda.empty_cache()
- print("------------CPU Mode for This Batch-------------")
-
- _pred_scores = pred_scores.detach().cpu().float()
- _pred_bboxes = pred_bboxes.detach().cpu().float()
- _anchor_points = anchor_points.cpu().float()
- _gt_labels = gt_labels.cpu().float()
- _gt_bboxes = gt_bboxes.cpu().float()
- _mask_gt = mask_gt.cpu().float()
- _stride_tensor = stride_tensor.cpu().float()
-
- target_labels, target_bboxes, target_scores, fg_mask = \
- self.formal_assigner(
- _pred_scores,
- _pred_bboxes * _stride_tensor,
- _anchor_points,
- _gt_labels,
- _gt_bboxes,
- _mask_gt)
-
- target_labels = target_labels.cuda()
- target_bboxes = target_bboxes.cuda()
- target_scores = target_scores.cuda()
- fg_mask = fg_mask.cuda()
-        # Periodically free cached GPU memory
- if step_num % 10 == 0:
- torch.cuda.empty_cache()
-
- # rescale bbox
- target_bboxes /= stride_tensor
-
- # cls loss
- target_labels = torch.where(fg_mask > 0, target_labels, torch.full_like(target_labels, self.num_classes))
- one_hot_label = F.one_hot(target_labels.long(), self.num_classes + 1)[..., :-1]
- loss_cls = self.varifocal_loss(pred_scores, target_scores, one_hot_label)
-
- target_scores_sum = target_scores.sum()
-        # Avoid division by zero, which would make the loss inf or nan;
-        # if target_scores_sum is 0, loss_cls stays 0 as well.
- if target_scores_sum > 0:
- loss_cls /= target_scores_sum
-
- # bbox loss
- loss_iou, loss_dfl = self.bbox_loss(pred_distri, pred_bboxes, anchor_points_s, target_bboxes,
- target_scores, target_scores_sum, fg_mask)
-
- loss = self.loss_weight['class'] * loss_cls + \
- self.loss_weight['iou'] * loss_iou + \
- self.loss_weight['dfl'] * loss_dfl
-
- return loss, \
- torch.cat(((self.loss_weight['iou'] * loss_iou).unsqueeze(0),
- (self.loss_weight['dfl'] * loss_dfl).unsqueeze(0),
- (self.loss_weight['class'] * loss_cls).unsqueeze(0))).detach()
-
- def preprocess(self, targets, batch_size, scale_tensor):
- targets_list = np.zeros((batch_size, 1, 5)).tolist()
- for i, item in enumerate(targets.cpu().numpy().tolist()):
- targets_list[int(item[0])].append(item[1:])
- max_len = max((len(l) for l in targets_list))
- targets = torch.from_numpy(np.array(list(map(lambda l:l + [[-1,0,0,0,0]]*(max_len - len(l)), targets_list)))[:,1:,:]).to(targets.device)
- batch_target = targets[:, :, 1:5].mul_(scale_tensor)
- targets[..., 1:] = xywh2xyxy(batch_target)
- return targets
-
- def bbox_decode(self, anchor_points, pred_dist):
- if self.use_dfl:
- batch_size, n_anchors, _ = pred_dist.shape
- pred_dist = F.softmax(pred_dist.view(batch_size, n_anchors, 4, self.reg_max + 1), dim=-1).matmul(self.proj.to(pred_dist.device))
- return dist2bbox(pred_dist, anchor_points)
-
-
-class VarifocalLoss(nn.Module):
- def __init__(self):
- super(VarifocalLoss, self).__init__()
-
-    def forward(self, pred_score, gt_score, label, alpha=0.75, gamma=2.0):
-
- weight = alpha * pred_score.pow(gamma) * (1 - label) + gt_score * label
- with torch.cuda.amp.autocast(enabled=False):
- loss = (F.binary_cross_entropy(pred_score.float(), gt_score.float(), reduction='none') * weight).sum()
-
- return loss
-
-
-class BboxLoss(nn.Module):
- def __init__(self, num_classes, reg_max, use_dfl=False, iou_type='giou'):
- super(BboxLoss, self).__init__()
- self.num_classes = num_classes
- self.iou_loss = IOUloss(box_format='xyxy', iou_type=iou_type, eps=1e-10)
- self.reg_max = reg_max
- self.use_dfl = use_dfl
-
- def forward(self, pred_dist, pred_bboxes, anchor_points,
- target_bboxes, target_scores, target_scores_sum, fg_mask):
-
- # select positive samples mask
- num_pos = fg_mask.sum()
- if num_pos > 0:
- # iou loss
- bbox_mask = fg_mask.unsqueeze(-1).repeat([1, 1, 4])
- pred_bboxes_pos = torch.masked_select(pred_bboxes,
- bbox_mask).reshape([-1, 4])
- target_bboxes_pos = torch.masked_select(
- target_bboxes, bbox_mask).reshape([-1, 4])
- bbox_weight = torch.masked_select(
- target_scores.sum(-1), fg_mask).unsqueeze(-1)
- loss_iou = self.iou_loss(pred_bboxes_pos,
- target_bboxes_pos) * bbox_weight
- if target_scores_sum == 0:
- loss_iou = loss_iou.sum()
- else:
- loss_iou = loss_iou.sum() / target_scores_sum
-
- # dfl loss
- if self.use_dfl:
- dist_mask = fg_mask.unsqueeze(-1).repeat(
- [1, 1, (self.reg_max + 1) * 4])
- pred_dist_pos = torch.masked_select(
- pred_dist, dist_mask).reshape([-1, 4, self.reg_max + 1])
- target_ltrb = bbox2dist(anchor_points, target_bboxes, self.reg_max)
- target_ltrb_pos = torch.masked_select(
- target_ltrb, bbox_mask).reshape([-1, 4])
- loss_dfl = self._df_loss(pred_dist_pos,
- target_ltrb_pos) * bbox_weight
- if target_scores_sum == 0:
- loss_dfl = loss_dfl.sum()
- else:
- loss_dfl = loss_dfl.sum() / target_scores_sum
- else:
- loss_dfl = pred_dist.sum() * 0.
-
- else:
- loss_iou = pred_dist.sum() * 0.
- loss_dfl = pred_dist.sum() * 0.
-
- return loss_iou, loss_dfl
-
- def _df_loss(self, pred_dist, target):
- target_left = target.to(torch.long)
- target_right = target_left + 1
- weight_left = target_right.to(torch.float) - target
- weight_right = 1 - weight_left
- loss_left = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_left.view(-1), reduction='none').view(
- target_left.shape) * weight_left
- loss_right = F.cross_entropy(
- pred_dist.view(-1, self.reg_max + 1), target_right.view(-1), reduction='none').view(
- target_left.shape) * weight_right
- return (loss_left + loss_right).mean(-1, keepdim=True)
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/reppan.py b/cv/detection/yolov6/pytorch/yolov6/models/reppan.py
deleted file mode 100644
index 2114f52120a0690f0da76c6eec7125bf06515bd6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/reppan.py
+++ /dev/null
@@ -1,1226 +0,0 @@
-import torch
-from torch import nn
-from yolov6.layers.common import RepBlock, RepVGGBlock, BottleRep, BepC3, ConvBNReLU, Transpose, BiFusion, \
- MBLABlock, ConvBNHS, CSPBlock, DPBlock
-
-# _QUANT=False
-class RepPANNeck(nn.Module):
- """RepPANNeck Module
- EfficientRep is the default backbone of this model.
-    RepPANNeck balances feature fusion ability and hardware efficiency.
- """
-
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- self.Rep_p4 = RepBlock(
- in_channels=channels_list[3] + channels_list[5],
- out_channels=channels_list[5],
- n=num_repeats[5],
- block=block
- )
-
- self.Rep_p3 = RepBlock(
- in_channels=channels_list[2] + channels_list[6],
- out_channels=channels_list[6],
- n=num_repeats[6],
- block=block
- )
-
- self.Rep_n3 = RepBlock(
- in_channels=channels_list[6] + channels_list[7],
- out_channels=channels_list[8],
- n=num_repeats[7],
- block=block
- )
-
- self.Rep_n4 = RepBlock(
- in_channels=channels_list[5] + channels_list[9],
- out_channels=channels_list[10],
- n=num_repeats[8],
- block=block
- )
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[4],
- out_channels=channels_list[5],
- kernel_size=1,
- stride=1
- )
-
- self.upsample0 = Transpose(
- in_channels=channels_list[5],
- out_channels=channels_list[5],
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[5],
- out_channels=channels_list[6],
- kernel_size=1,
- stride=1
- )
-
- self.upsample1 = Transpose(
- in_channels=channels_list[6],
- out_channels=channels_list[6]
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[6],
- out_channels=channels_list[7],
- kernel_size=3,
- stride=2
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[8],
- out_channels=channels_list[9],
- kernel_size=3,
- stride=2
- )
-
- def upsample_enable_quant(self, num_bits, calib_method):
- print("Insert fakequant after upsample")
- # Insert fakequant after upsample op to build TensorRT engine
- from pytorch_quantization import nn as quant_nn
- from pytorch_quantization.tensor_quant import QuantDescriptor
- conv2d_input_default_desc = QuantDescriptor(num_bits=num_bits, calib_method=calib_method)
- self.upsample_feat0_quant = quant_nn.TensorQuantizer(conv2d_input_default_desc)
- self.upsample_feat1_quant = quant_nn.TensorQuantizer(conv2d_input_default_desc)
- # global _QUANT
- self._QUANT = True
-
- def forward(self, input):
-
- (x2, x1, x0) = input
-
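-        # Top-down FPN path (reduce channels, upsample, concat, RepBlock), then a
-        # bottom-up PAN path (downsample, concat, RepBlock) across the three scales.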
- fpn_out0 = self.reduce_layer0(x0)
- upsample_feat0 = self.upsample0(fpn_out0)
- if hasattr(self, '_QUANT') and self._QUANT is True:
- upsample_feat0 = self.upsample_feat0_quant(upsample_feat0)
- f_concat_layer0 = torch.cat([upsample_feat0, x1], 1)
- f_out0 = self.Rep_p4(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- upsample_feat1 = self.upsample1(fpn_out1)
- if hasattr(self, '_QUANT') and self._QUANT is True:
- upsample_feat1 = self.upsample_feat1_quant(upsample_feat1)
- f_concat_layer1 = torch.cat([upsample_feat1, x2], 1)
- pan_out2 = self.Rep_p3(f_concat_layer1)
-
- down_feat1 = self.downsample2(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n3(p_concat_layer1)
-
- down_feat0 = self.downsample1(pan_out1)
- p_concat_layer2 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n4(p_concat_layer2)
-
- outputs = [pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-
-class RepBiFPANNeck(nn.Module):
- """RepBiFPANNeck Module
- """
- # [64, 128, 256, 512, 1024]
- # [256, 128, 128, 256, 256, 512]
-
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[4], # 1024
- out_channels=channels_list[5], # 256
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion0 = BiFusion(
- in_channels=[channels_list[3], channels_list[2]], # 512, 256
- out_channels=channels_list[5], # 256
- )
- self.Rep_p4 = RepBlock(
- in_channels=channels_list[5], # 256
- out_channels=channels_list[5], # 256
- n=num_repeats[5],
- block=block
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[5], # 256
- out_channels=channels_list[6], # 128
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion1 = BiFusion(
- in_channels=[channels_list[2], channels_list[1]], # 256, 128
- out_channels=channels_list[6], # 128
- )
-
- self.Rep_p3 = RepBlock(
- in_channels=channels_list[6], # 128
- out_channels=channels_list[6], # 128
- n=num_repeats[6],
- block=block
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[6], # 128
- out_channels=channels_list[7], # 128
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n3 = RepBlock(
- in_channels=channels_list[6] + channels_list[7], # 128 + 128
- out_channels=channels_list[8], # 256
- n=num_repeats[7],
- block=block
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[8], # 256
- out_channels=channels_list[9], # 256
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n4 = RepBlock(
- in_channels=channels_list[5] + channels_list[9], # 256 + 256
- out_channels=channels_list[10], # 512
- n=num_repeats[8],
- block=block
- )
-
-
- def forward(self, input):
-
- (x3, x2, x1, x0) = input
-
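-        # BiFusion fuses each reduced top-down feature with the two next-finer
-        # backbone levels, replacing the plain upsample + concat of RepPANNeck.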
- fpn_out0 = self.reduce_layer0(x0)
- f_concat_layer0 = self.Bifusion0([fpn_out0, x1, x2])
- f_out0 = self.Rep_p4(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- f_concat_layer1 = self.Bifusion1([fpn_out1, x2, x3])
- pan_out2 = self.Rep_p3(f_concat_layer1)
-
- down_feat1 = self.downsample2(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n3(p_concat_layer1)
-
- down_feat0 = self.downsample1(pan_out1)
- p_concat_layer2 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n4(p_concat_layer2)
-
- outputs = [pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-
-class RepPANNeck6(nn.Module):
- """RepPANNeck+P6 Module
- EfficientRep is the default backbone of this model.
-    RepPANNeck balances feature fusion ability and hardware efficiency.
- """
- # [64, 128, 256, 512, 768, 1024]
- # [512, 256, 128, 256, 512, 1024]
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[5], # 1024
- out_channels=channels_list[6], # 512
- kernel_size=1,
- stride=1
- )
-
- self.upsample0 = Transpose(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[6], # 512
- )
-
- self.Rep_p5 = RepBlock(
- in_channels=channels_list[4] + channels_list[6], # 768 + 512
- out_channels=channels_list[6], # 512
- n=num_repeats[6],
- block=block
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[7], # 256
- kernel_size=1,
- stride=1
- )
-
- self.upsample1 = Transpose(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[7] # 256
- )
-
- self.Rep_p4 = RepBlock(
- in_channels=channels_list[3] + channels_list[7], # 512 + 256
- out_channels=channels_list[7], # 256
- n=num_repeats[7],
- block=block
- )
-
- self.reduce_layer2 = ConvBNReLU(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[8], # 128
- kernel_size=1,
- stride=1
- )
-
- self.upsample2 = Transpose(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8] # 128
- )
-
- self.Rep_p3 = RepBlock(
- in_channels=channels_list[2] + channels_list[8], # 256 + 128
- out_channels=channels_list[8], # 128
- n=num_repeats[8],
- block=block
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8], # 128
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n4 = RepBlock(
- in_channels=channels_list[8] + channels_list[8], # 128 + 128
- out_channels=channels_list[9], # 256
- n=num_repeats[9],
- block=block
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[9], # 256
- out_channels=channels_list[9], # 256
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n5 = RepBlock(
- in_channels=channels_list[7] + channels_list[9], # 256 + 256
- out_channels=channels_list[10], # 512
- n=num_repeats[10],
- block=block
- )
-
- self.downsample0 = ConvBNReLU(
- in_channels=channels_list[10], # 512
- out_channels=channels_list[10], # 512
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n6 = RepBlock(
- in_channels=channels_list[6] + channels_list[10], # 512 + 512
- out_channels=channels_list[11], # 1024
- n=num_repeats[11],
- block=block
- )
-
-
- def forward(self, input):
-
- (x3, x2, x1, x0) = input
-
- fpn_out0 = self.reduce_layer0(x0)
- upsample_feat0 = self.upsample0(fpn_out0)
- f_concat_layer0 = torch.cat([upsample_feat0, x1], 1)
- f_out0 = self.Rep_p5(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- upsample_feat1 = self.upsample1(fpn_out1)
- f_concat_layer1 = torch.cat([upsample_feat1, x2], 1)
- f_out1 = self.Rep_p4(f_concat_layer1)
-
- fpn_out2 = self.reduce_layer2(f_out1)
- upsample_feat2 = self.upsample2(fpn_out2)
- f_concat_layer2 = torch.cat([upsample_feat2, x3], 1)
- pan_out3 = self.Rep_p3(f_concat_layer2) # P3
-
- down_feat2 = self.downsample2(pan_out3)
- p_concat_layer2 = torch.cat([down_feat2, fpn_out2], 1)
- pan_out2 = self.Rep_n4(p_concat_layer2) # P4
-
- down_feat1 = self.downsample1(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n5(p_concat_layer1) # P5
-
- down_feat0 = self.downsample0(pan_out1)
- p_concat_layer0 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n6(p_concat_layer0) # P6
-
- outputs = [pan_out3, pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-
-class RepBiFPANNeck6(nn.Module):
- """RepBiFPANNeck_P6 Module
- """
- # [64, 128, 256, 512, 768, 1024]
- # [512, 256, 128, 256, 512, 1024]
-
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=RepVGGBlock
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[5], # 1024
- out_channels=channels_list[6], # 512
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion0 = BiFusion(
- in_channels=[channels_list[4], channels_list[6]], # 768, 512
- out_channels=channels_list[6], # 512
- )
-
- self.Rep_p5 = RepBlock(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[6], # 512
- n=num_repeats[6],
- block=block
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[7], # 256
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion1 = BiFusion(
- in_channels=[channels_list[3], channels_list[7]], # 512, 256
- out_channels=channels_list[7], # 256
- )
-
- self.Rep_p4 = RepBlock(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[7], # 256
- n=num_repeats[7],
- block=block
- )
-
- self.reduce_layer2 = ConvBNReLU(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[8], # 128
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion2 = BiFusion(
- in_channels=[channels_list[2], channels_list[8]], # 256, 128
- out_channels=channels_list[8], # 128
- )
-
- self.Rep_p3 = RepBlock(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8], # 128
- n=num_repeats[8],
- block=block
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8], # 128
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n4 = RepBlock(
- in_channels=channels_list[8] + channels_list[8], # 128 + 128
- out_channels=channels_list[9], # 256
- n=num_repeats[9],
- block=block
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[9], # 256
- out_channels=channels_list[9], # 256
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n5 = RepBlock(
- in_channels=channels_list[7] + channels_list[9], # 256 + 256
- out_channels=channels_list[10], # 512
- n=num_repeats[10],
- block=block
- )
-
- self.downsample0 = ConvBNReLU(
- in_channels=channels_list[10], # 512
- out_channels=channels_list[10], # 512
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n6 = RepBlock(
- in_channels=channels_list[6] + channels_list[10], # 512 + 512
- out_channels=channels_list[11], # 1024
- n=num_repeats[11],
- block=block
- )
-
-
- def forward(self, input):
-
- (x4, x3, x2, x1, x0) = input
-
- fpn_out0 = self.reduce_layer0(x0)
- f_concat_layer0 = self.Bifusion0([fpn_out0, x1, x2])
- f_out0 = self.Rep_p5(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- f_concat_layer1 = self.Bifusion1([fpn_out1, x2, x3])
- f_out1 = self.Rep_p4(f_concat_layer1)
-
- fpn_out2 = self.reduce_layer2(f_out1)
- f_concat_layer2 = self.Bifusion2([fpn_out2, x3, x4])
- pan_out3 = self.Rep_p3(f_concat_layer2) # P3
-
- down_feat2 = self.downsample2(pan_out3)
- p_concat_layer2 = torch.cat([down_feat2, fpn_out2], 1)
- pan_out2 = self.Rep_n4(p_concat_layer2) # P4
-
- down_feat1 = self.downsample1(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n5(p_concat_layer1) # P5
-
- down_feat0 = self.downsample0(pan_out1)
- p_concat_layer0 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n6(p_concat_layer0) # P6
-
- outputs = [pan_out3, pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-
-class CSPRepPANNeck(nn.Module):
- """
- CSPRepPANNeck module.
- """
-
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=BottleRep,
- csp_e=float(1)/2,
- stage_block_type="BepC3"
- ):
- super().__init__()
-
- if stage_block_type == "BepC3":
- stage_block = BepC3
- elif stage_block_type == "MBLABlock":
- stage_block = MBLABlock
- else:
- raise NotImplementedError
-
- assert channels_list is not None
- assert num_repeats is not None
-
- self.Rep_p4 = stage_block(
- in_channels=channels_list[3] + channels_list[5], # 512 + 256
- out_channels=channels_list[5], # 256
- n=num_repeats[5],
- e=csp_e,
- block=block
- )
-
- self.Rep_p3 = stage_block(
- in_channels=channels_list[2] + channels_list[6], # 256 + 128
- out_channels=channels_list[6], # 128
- n=num_repeats[6],
- e=csp_e,
- block=block
- )
-
- self.Rep_n3 = stage_block(
- in_channels=channels_list[6] + channels_list[7], # 128 + 128
- out_channels=channels_list[8], # 256
- n=num_repeats[7],
- e=csp_e,
- block=block
- )
-
- self.Rep_n4 = stage_block(
- in_channels=channels_list[5] + channels_list[9], # 256 + 256
- out_channels=channels_list[10], # 512
- n=num_repeats[8],
- e=csp_e,
- block=block
- )
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[4], # 1024
- out_channels=channels_list[5], # 256
- kernel_size=1,
- stride=1
- )
-
- self.upsample0 = Transpose(
- in_channels=channels_list[5], # 256
- out_channels=channels_list[5], # 256
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[5], # 256
- out_channels=channels_list[6], # 128
- kernel_size=1,
- stride=1
- )
-
- self.upsample1 = Transpose(
- in_channels=channels_list[6], # 128
- out_channels=channels_list[6] # 128
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[6], # 128
- out_channels=channels_list[7], # 128
- kernel_size=3,
- stride=2
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[8], # 256
- out_channels=channels_list[9], # 256
- kernel_size=3,
- stride=2
- )
-
- def forward(self, input):
-
- (x2, x1, x0) = input
-
- fpn_out0 = self.reduce_layer0(x0)
- upsample_feat0 = self.upsample0(fpn_out0)
- f_concat_layer0 = torch.cat([upsample_feat0, x1], 1)
- f_out0 = self.Rep_p4(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- upsample_feat1 = self.upsample1(fpn_out1)
- f_concat_layer1 = torch.cat([upsample_feat1, x2], 1)
- pan_out2 = self.Rep_p3(f_concat_layer1)
-
- down_feat1 = self.downsample2(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n3(p_concat_layer1)
-
- down_feat0 = self.downsample1(pan_out1)
- p_concat_layer2 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n4(p_concat_layer2)
-
- outputs = [pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-
-class CSPRepBiFPANNeck(nn.Module):
- """
- CSPRepBiFPANNeck module.
- """
-
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=BottleRep,
- csp_e=float(1)/2,
- stage_block_type="BepC3"
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- if stage_block_type == "BepC3":
- stage_block = BepC3
- elif stage_block_type == "MBLABlock":
- stage_block = MBLABlock
- else:
- raise NotImplementedError
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[4], # 1024
- out_channels=channels_list[5], # 256
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion0 = BiFusion(
- in_channels=[channels_list[3], channels_list[2]], # 512, 256
- out_channels=channels_list[5], # 256
- )
-
- self.Rep_p4 = stage_block(
- in_channels=channels_list[5], # 256
- out_channels=channels_list[5], # 256
- n=num_repeats[5],
- e=csp_e,
- block=block
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[5], # 256
- out_channels=channels_list[6], # 128
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion1 = BiFusion(
- in_channels=[channels_list[2], channels_list[1]], # 256, 128
- out_channels=channels_list[6], # 128
- )
-
- self.Rep_p3 = stage_block(
- in_channels=channels_list[6], # 128
- out_channels=channels_list[6], # 128
- n=num_repeats[6],
- e=csp_e,
- block=block
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[6], # 128
- out_channels=channels_list[7], # 128
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n3 = stage_block(
- in_channels=channels_list[6] + channels_list[7], # 128 + 128
- out_channels=channels_list[8], # 256
- n=num_repeats[7],
- e=csp_e,
- block=block
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[8], # 256
- out_channels=channels_list[9], # 256
- kernel_size=3,
- stride=2
- )
-
-
- self.Rep_n4 = stage_block(
- in_channels=channels_list[5] + channels_list[9], # 256 + 256
- out_channels=channels_list[10], # 512
- n=num_repeats[8],
- e=csp_e,
- block=block
- )
-
-
- def forward(self, input):
-
- (x3, x2, x1, x0) = input
-
- fpn_out0 = self.reduce_layer0(x0)
- f_concat_layer0 = self.Bifusion0([fpn_out0, x1, x2])
- f_out0 = self.Rep_p4(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- f_concat_layer1 = self.Bifusion1([fpn_out1, x2, x3])
- pan_out2 = self.Rep_p3(f_concat_layer1)
-
- down_feat1 = self.downsample2(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n3(p_concat_layer1)
-
- down_feat0 = self.downsample1(pan_out1)
- p_concat_layer2 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n4(p_concat_layer2)
-
- outputs = [pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-
-class CSPRepPANNeck_P6(nn.Module):
- """CSPRepPANNeck_P6 Module
- """
- # [64, 128, 256, 512, 768, 1024]
- # [512, 256, 128, 256, 512, 1024]
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=BottleRep,
- csp_e=float(1)/2,
- stage_block_type="BepC3"
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- if stage_block_type == "BepC3":
- stage_block = BepC3
- elif stage_block_type == "MBLABlock":
- stage_block = MBLABlock
- else:
- raise NotImplementedError
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[5], # 1024
- out_channels=channels_list[6], # 512
- kernel_size=1,
- stride=1
- )
-
- self.upsample0 = Transpose(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[6], # 512
- )
-
- self.Rep_p5 = stage_block(
- in_channels=channels_list[4] + channels_list[6], # 768 + 512
- out_channels=channels_list[6], # 512
- n=num_repeats[6],
- e=csp_e,
- block=block
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[7], # 256
- kernel_size=1,
- stride=1
- )
-
- self.upsample1 = Transpose(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[7] # 256
- )
-
- self.Rep_p4 = stage_block(
- in_channels=channels_list[3] + channels_list[7], # 512 + 256
- out_channels=channels_list[7], # 256
- n=num_repeats[7],
- e=csp_e,
- block=block
- )
-
- self.reduce_layer2 = ConvBNReLU(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[8], # 128
- kernel_size=1,
- stride=1
- )
-
- self.upsample2 = Transpose(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8] # 128
- )
-
- self.Rep_p3 = stage_block(
- in_channels=channels_list[2] + channels_list[8], # 256 + 128
- out_channels=channels_list[8], # 128
- n=num_repeats[8],
- e=csp_e,
- block=block
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8], # 128
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n4 = stage_block(
- in_channels=channels_list[8] + channels_list[8], # 128 + 128
- out_channels=channels_list[9], # 256
- n=num_repeats[9],
- e=csp_e,
- block=block
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[9], # 256
- out_channels=channels_list[9], # 256
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n5 = stage_block(
- in_channels=channels_list[7] + channels_list[9], # 256 + 256
- out_channels=channels_list[10], # 512
- n=num_repeats[10],
- e=csp_e,
- block=block
- )
-
- self.downsample0 = ConvBNReLU(
- in_channels=channels_list[10], # 512
- out_channels=channels_list[10], # 512
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n6 = stage_block(
- in_channels=channels_list[6] + channels_list[10], # 512 + 512
- out_channels=channels_list[11], # 1024
- n=num_repeats[11],
- e=csp_e,
- block=block
- )
-
-
- def forward(self, input):
-
- (x3, x2, x1, x0) = input
-
- fpn_out0 = self.reduce_layer0(x0)
- upsample_feat0 = self.upsample0(fpn_out0)
- f_concat_layer0 = torch.cat([upsample_feat0, x1], 1)
- f_out0 = self.Rep_p5(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- upsample_feat1 = self.upsample1(fpn_out1)
- f_concat_layer1 = torch.cat([upsample_feat1, x2], 1)
- f_out1 = self.Rep_p4(f_concat_layer1)
-
- fpn_out2 = self.reduce_layer2(f_out1)
- upsample_feat2 = self.upsample2(fpn_out2)
- f_concat_layer2 = torch.cat([upsample_feat2, x3], 1)
- pan_out3 = self.Rep_p3(f_concat_layer2) # P3
-
- down_feat2 = self.downsample2(pan_out3)
- p_concat_layer2 = torch.cat([down_feat2, fpn_out2], 1)
- pan_out2 = self.Rep_n4(p_concat_layer2) # P4
-
- down_feat1 = self.downsample1(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n5(p_concat_layer1) # P5
-
- down_feat0 = self.downsample0(pan_out1)
- p_concat_layer0 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n6(p_concat_layer0) # P6
-
- outputs = [pan_out3, pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-
-class CSPRepBiFPANNeck_P6(nn.Module):
- """CSPRepBiFPANNeck_P6 Module
- """
- # [64, 128, 256, 512, 768, 1024]
- # [512, 256, 128, 256, 512, 1024]
- def __init__(
- self,
- channels_list=None,
- num_repeats=None,
- block=BottleRep,
- csp_e=float(1)/2,
- stage_block_type="BepC3"
- ):
- super().__init__()
-
- assert channels_list is not None
- assert num_repeats is not None
-
- if stage_block_type == "BepC3":
- stage_block = BepC3
- elif stage_block_type == "MBLABlock":
- stage_block = MBLABlock
- else:
- raise NotImplementedError
-
- self.reduce_layer0 = ConvBNReLU(
- in_channels=channels_list[5], # 1024
- out_channels=channels_list[6], # 512
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion0 = BiFusion(
- in_channels=[channels_list[4], channels_list[6]], # 768, 512
- out_channels=channels_list[6], # 512
- )
-
- self.Rep_p5 = stage_block(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[6], # 512
- n=num_repeats[6],
- e=csp_e,
- block=block
- )
-
- self.reduce_layer1 = ConvBNReLU(
- in_channels=channels_list[6], # 512
- out_channels=channels_list[7], # 256
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion1 = BiFusion(
- in_channels=[channels_list[3], channels_list[7]], # 512, 256
- out_channels=channels_list[7], # 256
- )
-
- self.Rep_p4 = stage_block(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[7], # 256
- n=num_repeats[7],
- e=csp_e,
- block=block
- )
-
- self.reduce_layer2 = ConvBNReLU(
- in_channels=channels_list[7], # 256
- out_channels=channels_list[8], # 128
- kernel_size=1,
- stride=1
- )
-
- self.Bifusion2 = BiFusion(
- in_channels=[channels_list[2], channels_list[8]], # 256, 128
- out_channels=channels_list[8], # 128
- )
-
- self.Rep_p3 = stage_block(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8], # 128
- n=num_repeats[8],
- e=csp_e,
- block=block
- )
-
- self.downsample2 = ConvBNReLU(
- in_channels=channels_list[8], # 128
- out_channels=channels_list[8], # 128
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n4 = stage_block(
- in_channels=channels_list[8] + channels_list[8], # 128 + 128
- out_channels=channels_list[9], # 256
- n=num_repeats[9],
- e=csp_e,
- block=block
- )
-
- self.downsample1 = ConvBNReLU(
- in_channels=channels_list[9], # 256
- out_channels=channels_list[9], # 256
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n5 = stage_block(
- in_channels=channels_list[7] + channels_list[9], # 256 + 256
- out_channels=channels_list[10], # 512
- n=num_repeats[10],
- e=csp_e,
- block=block
- )
-
- self.downsample0 = ConvBNReLU(
- in_channels=channels_list[10], # 512
- out_channels=channels_list[10], # 512
- kernel_size=3,
- stride=2
- )
-
- self.Rep_n6 = stage_block(
- in_channels=channels_list[6] + channels_list[10], # 512 + 512
- out_channels=channels_list[11], # 1024
- n=num_repeats[11],
- e=csp_e,
- block=block
- )
-
-
- def forward(self, input):
-
- (x4, x3, x2, x1, x0) = input
-
- fpn_out0 = self.reduce_layer0(x0)
- f_concat_layer0 = self.Bifusion0([fpn_out0, x1, x2])
- f_out0 = self.Rep_p5(f_concat_layer0)
-
- fpn_out1 = self.reduce_layer1(f_out0)
- f_concat_layer1 = self.Bifusion1([fpn_out1, x2, x3])
- f_out1 = self.Rep_p4(f_concat_layer1)
-
- fpn_out2 = self.reduce_layer2(f_out1)
- f_concat_layer2 = self.Bifusion2([fpn_out2, x3, x4])
- pan_out3 = self.Rep_p3(f_concat_layer2) # P3
-
- down_feat2 = self.downsample2(pan_out3)
- p_concat_layer2 = torch.cat([down_feat2, fpn_out2], 1)
- pan_out2 = self.Rep_n4(p_concat_layer2) # P4
-
- down_feat1 = self.downsample1(pan_out2)
- p_concat_layer1 = torch.cat([down_feat1, fpn_out1], 1)
- pan_out1 = self.Rep_n5(p_concat_layer1) # P5
-
- down_feat0 = self.downsample0(pan_out1)
- p_concat_layer0 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out0 = self.Rep_n6(p_concat_layer0) # P6
-
- outputs = [pan_out3, pan_out2, pan_out1, pan_out0]
-
- return outputs
-
-class Lite_EffiNeck(nn.Module):
- """Lite_EffiNeck Module
- """
-
- def __init__(
- self,
- in_channels,
- unified_channels,
- ):
- super().__init__()
- self.reduce_layer0 = ConvBNHS(
- in_channels=in_channels[0],
- out_channels=unified_channels,
- kernel_size=1,
- stride=1,
- padding=0
- )
- self.reduce_layer1 = ConvBNHS(
- in_channels=in_channels[1],
- out_channels=unified_channels,
- kernel_size=1,
- stride=1,
- padding=0
- )
- self.reduce_layer2 = ConvBNHS(
- in_channels=in_channels[2],
- out_channels=unified_channels,
- kernel_size=1,
- stride=1,
- padding=0
- )
- self.upsample0 = nn.Upsample(scale_factor=2, mode='nearest')
-
- self.upsample1 = nn.Upsample(scale_factor=2, mode='nearest')
-
- self.Csp_p4 = CSPBlock(
- in_channels=unified_channels*2,
- out_channels=unified_channels,
- kernel_size=5
- )
- self.Csp_p3 = CSPBlock(
- in_channels=unified_channels*2,
- out_channels=unified_channels,
- kernel_size=5
- )
- self.Csp_n3 = CSPBlock(
- in_channels=unified_channels*2,
- out_channels=unified_channels,
- kernel_size=5
- )
- self.Csp_n4 = CSPBlock(
- in_channels=unified_channels*2,
- out_channels=unified_channels,
- kernel_size=5
- )
- self.downsample2 = DPBlock(
- in_channel=unified_channels,
- out_channel=unified_channels,
- kernel_size=5,
- stride=2
- )
- self.downsample1 = DPBlock(
- in_channel=unified_channels,
- out_channel=unified_channels,
- kernel_size=5,
- stride=2
- )
- self.p6_conv_1 = DPBlock(
- in_channel=unified_channels,
- out_channel=unified_channels,
- kernel_size=5,
- stride=2
- )
- self.p6_conv_2 = DPBlock(
- in_channel=unified_channels,
- out_channel=unified_channels,
- kernel_size=5,
- stride=2
- )
-
- def forward(self, input):
-
- (x2, x1, x0) = input
-
- fpn_out0 = self.reduce_layer0(x0)  # C5
- x1 = self.reduce_layer1(x1)  # C4
- x2 = self.reduce_layer2(x2)  # C3
-
- upsample_feat0 = self.upsample0(fpn_out0)
- f_concat_layer0 = torch.cat([upsample_feat0, x1], 1)
- f_out1 = self.Csp_p4(f_concat_layer0)
-
- upsample_feat1 = self.upsample1(f_out1)
- f_concat_layer1 = torch.cat([upsample_feat1, x2], 1)
- pan_out3 = self.Csp_p3(f_concat_layer1)  # P3
-
- down_feat1 = self.downsample2(pan_out3)
- p_concat_layer1 = torch.cat([down_feat1, f_out1], 1)
- pan_out2 = self.Csp_n3(p_concat_layer1)  # P4
-
- down_feat0 = self.downsample1(pan_out2)
- p_concat_layer2 = torch.cat([down_feat0, fpn_out0], 1)
- pan_out1 = self.Csp_n4(p_concat_layer2)  # P5
-
- top_features = self.p6_conv_1(fpn_out0)
- pan_out0 = top_features + self.p6_conv_2(pan_out1)  # P6
-
- outputs = [pan_out3, pan_out2, pan_out1, pan_out0]
-
- return outputs
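All the neck variants above repeat one fusion step on the top-down path: reduce the deeper map's channels with a 1x1 conv, upsample 2x, concatenate with the shallower map, and fuse. A minimal shape check of that step, using plain torch layers as stand-ins for the repo's ConvBNReLU/Transpose/RepBlock (the layer names and the input shapes here are illustrative):

```python
import torch
import torch.nn as nn

# Toy stand-ins, shapes only: 1x1 reduce, 2x transposed-conv upsample, 3x3 fuse.
reduce_layer = nn.Conv2d(1024, 512, kernel_size=1)           # C5: 1024 -> 512
upsample = nn.ConvTranspose2d(512, 512, kernel_size=2, stride=2)
fuse = nn.Conv2d(512 + 512, 512, kernel_size=3, padding=1)   # cat(up(C5), C4) -> P5

c4 = torch.randn(1, 512, 40, 40)    # stride-16 feature
c5 = torch.randn(1, 1024, 20, 20)   # stride-32 feature

fpn_out0 = reduce_layer(c5)                 # (1, 512, 20, 20)
up0 = upsample(fpn_out0)                    # (1, 512, 40, 40)
p5 = fuse(torch.cat([up0, c4], dim=1))      # (1, 512, 40, 40)
print(p5.shape)
```

The bottom-up path mirrors this with stride-2 downsample convs, which is why each `Rep_n*` block's `in_channels` is the concatenated sum noted in the comments.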
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/yolo.py b/cv/detection/yolov6/pytorch/yolov6/models/yolo.py
deleted file mode 100644
index 2f37f1b16e159465c656303e17fd88e23646f3ae..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/yolo.py
+++ /dev/null
@@ -1,138 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from yolov6.layers.common import *
-from yolov6.utils.torch_utils import initialize_weights
-from yolov6.models.efficientrep import *
-from yolov6.models.reppan import *
-from yolov6.utils.events import LOGGER
-
-
-class Model(nn.Module):
- '''YOLOv6 model with backbone, neck and head.
- The default parts are EfficientRep Backbone, Rep-PAN and
- Efficient Decoupled Head.
- '''
- export = False
- def __init__(self, config, channels=3, num_classes=None, fuse_ab=False, distill_ns=False): # model, input channels, number of classes
- super().__init__()
- # Build network
- num_layers = config.model.head.num_layers
- self.backbone, self.neck, self.detect = build_network(config, channels, num_classes, num_layers, fuse_ab=fuse_ab, distill_ns=distill_ns)
-
- # Init Detect head
- self.stride = self.detect.stride
- self.detect.initialize_biases()
-
- # Init weights
- initialize_weights(self)
-
- def forward(self, x):
- export_mode = torch.onnx.is_in_onnx_export() or self.export
- x = self.backbone(x)
- x = self.neck(x)
- if not export_mode:
- featmaps = []
- featmaps.extend(x)
- x = self.detect(x)
- return x if export_mode else [x, featmaps]
-
- def _apply(self, fn):
- self = super()._apply(fn)
- self.detect.stride = fn(self.detect.stride)
- self.detect.grid = list(map(fn, self.detect.grid))
- return self
-
-
-def make_divisible(x, divisor):
- # Round x up to the nearest multiple of divisor.
- return math.ceil(x / divisor) * divisor
-
-
-def build_network(config, channels, num_classes, num_layers, fuse_ab=False, distill_ns=False):
- depth_mul = config.model.depth_multiple
- width_mul = config.model.width_multiple
- num_repeat_backbone = config.model.backbone.num_repeats
- channels_list_backbone = config.model.backbone.out_channels
- fuse_P2 = config.model.backbone.get('fuse_P2')
- cspsppf = config.model.backbone.get('cspsppf')
- num_repeat_neck = config.model.neck.num_repeats
- channels_list_neck = config.model.neck.out_channels
- use_dfl = config.model.head.use_dfl
- reg_max = config.model.head.reg_max
- num_repeat = [(max(round(i * depth_mul), 1) if i > 1 else i) for i in (num_repeat_backbone + num_repeat_neck)]
- channels_list = [make_divisible(i * width_mul, 8) for i in (channels_list_backbone + channels_list_neck)]
-
- block = get_block(config.training_mode)
- BACKBONE = eval(config.model.backbone.type)
- NECK = eval(config.model.neck.type)
-
- if 'CSP' in config.model.backbone.type:
-
- if "stage_block_type" in config.model.backbone:
- stage_block_type = config.model.backbone.stage_block_type
- else:
- stage_block_type = "BepC3" #default
-
- backbone = BACKBONE(
- in_channels=channels,
- channels_list=channels_list,
- num_repeats=num_repeat,
- block=block,
- csp_e=config.model.backbone.csp_e,
- fuse_P2=fuse_P2,
- cspsppf=cspsppf,
- stage_block_type=stage_block_type
- )
-
- neck = NECK(
- channels_list=channels_list,
- num_repeats=num_repeat,
- block=block,
- csp_e=config.model.neck.csp_e,
- stage_block_type=stage_block_type
- )
- else:
- backbone = BACKBONE(
- in_channels=channels,
- channels_list=channels_list,
- num_repeats=num_repeat,
- block=block,
- fuse_P2=fuse_P2,
- cspsppf=cspsppf
- )
-
- neck = NECK(
- channels_list=channels_list,
- num_repeats=num_repeat,
- block=block
- )
-
- if distill_ns:
- from yolov6.models.heads.effidehead_distill_ns import Detect, build_effidehead_layer
- if num_layers != 3:
- LOGGER.error('ERROR: distill mode is not supported on n/s models with a P6 head.\n')
- exit()
- head_layers = build_effidehead_layer(channels_list, 1, num_classes, reg_max=reg_max)
- head = Detect(num_classes, num_layers, head_layers=head_layers, use_dfl=use_dfl)
-
- elif fuse_ab:
- from yolov6.models.heads.effidehead_fuseab import Detect, build_effidehead_layer
- anchors_init = config.model.head.anchors_init
- head_layers = build_effidehead_layer(channels_list, 3, num_classes, reg_max=reg_max, num_layers=num_layers)
- head = Detect(num_classes, anchors_init, num_layers, head_layers=head_layers, use_dfl=use_dfl)
-
- else:
- from yolov6.models.effidehead import Detect, build_effidehead_layer
- head_layers = build_effidehead_layer(channels_list, 1, num_classes, reg_max=reg_max, num_layers=num_layers)
- head = Detect(num_classes, num_layers, head_layers=head_layers, use_dfl=use_dfl)
-
- return backbone, neck, head
-
-
-def build_model(cfg, num_classes, device, fuse_ab=False, distill_ns=False):
- model = Model(cfg, channels=3, num_classes=num_classes, fuse_ab=fuse_ab, distill_ns=distill_ns).to(device)
- return model
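`build_network` scales the config's repeat counts by `depth_multiple` and the channel widths by `width_multiple`, rounding channels up to a multiple of 8. A worked example (the 0.33/0.50 multipliers and the config lists are assumed small-model values, not taken from a specific config file):

```python
import math

def make_divisible(x, divisor):
    # Round x up to the nearest multiple of divisor.
    return math.ceil(x / divisor) * divisor

depth_mul, width_mul = 0.33, 0.50
num_repeat_cfg = [1, 6, 12, 18, 6]
channels_cfg = [64, 128, 256, 512, 1024]

num_repeat = [(max(round(i * depth_mul), 1) if i > 1 else i) for i in num_repeat_cfg]
channels_list = [make_divisible(i * width_mul, 8) for i in channels_cfg]
print(num_repeat)     # [1, 2, 4, 6, 2]
print(channels_list)  # [32, 64, 128, 256, 512]
```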
diff --git a/cv/detection/yolov6/pytorch/yolov6/models/yolo_lite.py b/cv/detection/yolov6/pytorch/yolov6/models/yolo_lite.py
deleted file mode 100644
index e36f98060bceebb808be31d028ea25aaa5eae592..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/models/yolo_lite.py
+++ /dev/null
@@ -1,88 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from yolov6.layers.common import *
-from yolov6.utils.torch_utils import initialize_weights
-from yolov6.models.reppan import *
-from yolov6.models.efficientrep import *
-from yolov6.utils.events import LOGGER
-from yolov6.models.heads.effidehead_lite import Detect, build_effidehead_layer
-
-class Model(nn.Module):
- '''YOLOv6 model with backbone, neck and head.
- The default parts are EfficientRep Backbone, Rep-PAN and
- Efficient Decoupled Head.
- '''
- export = False
- def __init__(self, config, channels=3, num_classes=None): # model, input channels, number of classes
- super().__init__()
- # Build network
- self.backbone, self.neck, self.detect = build_network(config, channels, num_classes)
-
- # Init Detect head
- self.stride = self.detect.stride
- self.detect.initialize_biases()
-
- # Init weights
- initialize_weights(self)
-
- def forward(self, x):
- export_mode = torch.onnx.is_in_onnx_export() or self.export
- x = self.backbone(x)
- x = self.neck(x)
- if not export_mode:
- featmaps = []
- featmaps.extend(x)
- x = self.detect(x)
- return x if export_mode else [x, featmaps]  # export_mode already accounts for self.export
-
- def _apply(self, fn):
- self = super()._apply(fn)
- self.detect.stride = fn(self.detect.stride)
- self.detect.grid = list(map(fn, self.detect.grid))
- return self
-
-def build_network(config, in_channels, num_classes):
- width_mul = config.model.width_multiple
-
- num_repeat_backbone = config.model.backbone.num_repeats
- out_channels_backbone = config.model.backbone.out_channels
- scale_size_backbone = config.model.backbone.scale_size
- in_channels_neck = config.model.neck.in_channels
- unified_channels_neck = config.model.neck.unified_channels
- in_channels_head = config.model.head.in_channels
- num_layers = config.model.head.num_layers
-
- BACKBONE = eval(config.model.backbone.type)
- NECK = eval(config.model.neck.type)
-
- out_channels_backbone = [make_divisible(i * width_mul)
- for i in out_channels_backbone]
- mid_channels_backbone = [make_divisible(int(i * scale_size_backbone), divisor=8)
- for i in out_channels_backbone]
- in_channels_neck = [make_divisible(i * width_mul)
- for i in in_channels_neck]
-
- backbone = BACKBONE(in_channels,
- mid_channels_backbone,
- out_channels_backbone,
- num_repeat=num_repeat_backbone)
- neck = NECK(in_channels_neck, unified_channels_neck)
- head_layers = build_effidehead_layer(in_channels_head, 1, num_classes, num_layers)
- head = Detect(num_classes, num_layers, head_layers=head_layers)
-
- return backbone, neck, head
-
-
-def build_model(cfg, num_classes, device):
- model = Model(cfg, channels=3, num_classes=num_classes).to(device)
- return model
-
-def make_divisible(v, divisor=16):
- new_v = max(divisor, int(v + divisor / 2) // divisor * divisor)
- if new_v < 0.9 * v:
- new_v += divisor
- return new_v
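Note that this `make_divisible` differs from the ceiling-based one in `yolo.py`: it rounds to the nearest multiple of the divisor, floors at one divisor, and refuses to shrink a value by more than 10%. A quick check:

```python
def make_divisible(v, divisor=16):
    new_v = max(divisor, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:  # never round down by more than 10%
        new_v += divisor
    return new_v

print(make_divisible(100))  # 96: nearest multiple of 16, within the 10% budget
print(make_divisible(23))   # 32: rounding down to 16 would lose >10%, so bump up
print(make_divisible(5))    # 16: floor of one divisor
```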
diff --git a/cv/detection/yolov6/pytorch/yolov6/solver/build.py b/cv/detection/yolov6/pytorch/yolov6/solver/build.py
deleted file mode 100644
index 716b0be7c4fd8311a8c07e39dd2d2a2fcfb468f8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/solver/build.py
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import math
-
-import torch
-import torch.nn as nn
-
-from yolov6.utils.events import LOGGER
-
-
-def build_optimizer(cfg, model):
- """ Build optimizer from cfg file."""
- g_bnw, g_w, g_b = [], [], []
- for v in model.modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- g_b.append(v.bias)
- if isinstance(v, nn.BatchNorm2d):
- g_bnw.append(v.weight)
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- g_w.append(v.weight)
-
- assert cfg.solver.optim in ('SGD', 'Adam'), 'ERROR: unknown optimizer, only SGD and Adam are supported'
- if cfg.solver.optim == 'SGD':
- optimizer = torch.optim.SGD(g_bnw, lr=cfg.solver.lr0, momentum=cfg.solver.momentum, nesterov=True)
- elif cfg.solver.optim == 'Adam':
- optimizer = torch.optim.Adam(g_bnw, lr=cfg.solver.lr0, betas=(cfg.solver.momentum, 0.999))
-
- optimizer.add_param_group({'params': g_w, 'weight_decay': cfg.solver.weight_decay})
- optimizer.add_param_group({'params': g_b})
-
- del g_bnw, g_w, g_b
- return optimizer
-
-
-def build_lr_scheduler(cfg, optimizer, epochs):
- """Build learning rate scheduler from cfg file."""
- if cfg.solver.lr_scheduler == 'Cosine':
- lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (cfg.solver.lrf - 1) + 1
- elif cfg.solver.lr_scheduler == 'Constant':
- lf = lambda x: 1.0
- else:
- LOGGER.error('unknown lr scheduler, falling back to Cosine')
- lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (cfg.solver.lrf - 1) + 1
-
- scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- return scheduler, lf
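The `'Cosine'` lambda anneals the LR multiplier from 1 at epoch 0 down to `cfg.solver.lrf` at the final epoch. A sanity check of those endpoints (`epochs=300` and `lrf=0.01` are assumed example values):

```python
import math

epochs, lrf = 300, 0.01
lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (lrf - 1) + 1

print(lf(0))                     # 1.0   -> training starts at lr0
print(round(lf(epochs / 2), 3))  # 0.505 -> halfway through the ramp
print(round(lf(epochs), 4))      # 0.01  -> training ends at lr0 * lrf
```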
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/Arial.ttf b/cv/detection/yolov6/pytorch/yolov6/utils/Arial.ttf
deleted file mode 100644
index ab68fb197d4479b3b6dec6e85bd5cbaf433a87c5..0000000000000000000000000000000000000000
Binary files a/cv/detection/yolov6/pytorch/yolov6/utils/Arial.ttf and /dev/null differ
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/RepOptimizer.py b/cv/detection/yolov6/pytorch/yolov6/utils/RepOptimizer.py
deleted file mode 100644
index c4653ac09aee5903b4a9fde30750f522f8a5822c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/RepOptimizer.py
+++ /dev/null
@@ -1,195 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from ..layers.common import RealVGGBlock, LinearAddBlock
-from torch.optim.sgd import SGD
-from yolov6.utils.events import LOGGER
-
-
-def extract_blocks_into_list(model, blocks):
- for module in model.children():
- if isinstance(module, LinearAddBlock) or isinstance(module, RealVGGBlock):
- blocks.append(module)
- else:
- extract_blocks_into_list(module, blocks)
-
-
-def extract_scales(model):
- blocks = []
- extract_blocks_into_list(model['model'], blocks)
- scales = []
- for b in blocks:
- assert isinstance(b, LinearAddBlock)
- if hasattr(b, 'scale_identity'):
- scales.append((b.scale_identity.weight.detach(), b.scale_1x1.weight.detach(), b.scale_conv.weight.detach()))
- else:
- scales.append((b.scale_1x1.weight.detach(), b.scale_conv.weight.detach()))
- print('extract scales: ', scales[-1][-2].mean(), scales[-1][-1].mean())
- return scales
-
-
-def check_keywords_in_name(name, keywords=()):
- isin = False
- for keyword in keywords:
- if keyword in name:
- isin = True
- return isin
-
-
-def set_weight_decay(model, skip_list=(), skip_keywords=(), echo=False):
- has_decay = []
- no_decay = []
-
- for name, param in model.named_parameters():
- if not param.requires_grad:
- continue # frozen weights
- if 'identity.weight' in name:
- has_decay.append(param)
- if echo:
- print(f"{name} USE weight decay")
- elif len(param.shape) == 1 or name.endswith(".bias") or (name in skip_list) or \
- check_keywords_in_name(name, skip_keywords):
- no_decay.append(param)
- if echo:
- print(f"{name} has no weight decay")
- else:
- has_decay.append(param)
- if echo:
- print(f"{name} USE weight decay")
-
- return [{'params': has_decay},
- {'params': no_decay, 'weight_decay': 0.}]
-
-
-def get_optimizer_param(args, cfg, model):
- """ Build optimizer from cfg file."""
- accumulate = max(1, round(64 / args.batch_size))
- cfg.solver.weight_decay *= args.batch_size * accumulate / 64
-
- g_bnw, g_w, g_b = [], [], []
- for v in model.modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- g_b.append(v.bias)
- if isinstance(v, nn.BatchNorm2d):
- g_bnw.append(v.weight)
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- g_w.append(v.weight)
- return [{'params': g_bnw},
- {'params': g_w, 'weight_decay': cfg.solver.weight_decay},
- {'params': g_b}]
-
-
-class RepVGGOptimizer(SGD):
- '''scales is a list, scales[i] is a triple (scale_identity.weight, scale_1x1.weight, scale_conv.weight) or a two-tuple (scale_1x1.weight, scale_conv.weight) (if the block has no scale_identity)'''
- def __init__(self, model, scales,
- args, cfg, momentum=0, dampening=0,
- weight_decay=0, nesterov=True,
- reinit=True, use_identity_scales_for_reinit=True,
- cpu_mode=False):
-
- defaults = dict(lr=cfg.solver.lr0, momentum=cfg.solver.momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov)
- if nesterov and (cfg.solver.momentum <= 0 or dampening != 0):
- raise ValueError("Nesterov momentum requires a momentum and zero dampening")
- # parameters = set_weight_decay(model)
- parameters = get_optimizer_param(args, cfg, model)
- super(SGD, self).__init__(parameters, defaults)
- self.num_layers = len(scales)
-
- blocks = []
- extract_blocks_into_list(model, blocks)
- convs = [b.conv for b in blocks]
- assert len(scales) == len(convs)
-
- if reinit:
- for m in model.modules():
- if isinstance(m, nn.BatchNorm2d):
- gamma_init = m.weight.mean()
- if gamma_init == 1.0:
- LOGGER.info('Checked. This is training from scratch.')
- else:
- LOGGER.warning('========================== Warning! Is this really training from scratch ? =================')
- LOGGER.info('##################### Re-initialize #############')
- self.reinitialize(scales, convs, use_identity_scales_for_reinit)
-
- self.generate_gradient_masks(scales, convs, cpu_mode)
-
- def reinitialize(self, scales_by_idx, conv3x3_by_idx, use_identity_scales):
- for scales, conv3x3 in zip(scales_by_idx, conv3x3_by_idx):
- in_channels = conv3x3.in_channels
- out_channels = conv3x3.out_channels
- kernel_1x1 = nn.Conv2d(in_channels, out_channels, 1, device=conv3x3.weight.device)
- if len(scales) == 2:
- conv3x3.weight.data = conv3x3.weight * scales[1].view(-1, 1, 1, 1) \
- + F.pad(kernel_1x1.weight, [1, 1, 1, 1]) * scales[0].view(-1, 1, 1, 1)
- else:
- assert len(scales) == 3
- assert in_channels == out_channels
- identity = torch.from_numpy(np.eye(out_channels, dtype=np.float32).reshape(out_channels, out_channels, 1, 1)).to(conv3x3.weight.device)
- conv3x3.weight.data = conv3x3.weight * scales[2].view(-1, 1, 1, 1) + F.pad(kernel_1x1.weight, [1, 1, 1, 1]) * scales[1].view(-1, 1, 1, 1)
- if use_identity_scales: # You may initialize the imaginary CSLA block with the trained identity_scale values. Makes almost no difference.
- identity_scale_weight = scales[0]
- conv3x3.weight.data += F.pad(identity * identity_scale_weight.view(-1, 1, 1, 1), [1, 1, 1, 1])
- else:
- conv3x3.weight.data += F.pad(identity, [1, 1, 1, 1])
-
- def generate_gradient_masks(self, scales_by_idx, conv3x3_by_idx, cpu_mode=False):
- self.grad_mask_map = {}
- for scales, conv3x3 in zip(scales_by_idx, conv3x3_by_idx):
- para = conv3x3.weight
- if len(scales) == 2:
- mask = torch.ones_like(para, device=scales[0].device) * (scales[1] ** 2).view(-1, 1, 1, 1)
- mask[:, :, 1:2, 1:2] += torch.ones(para.shape[0], para.shape[1], 1, 1, device=scales[0].device) * (scales[0] ** 2).view(-1, 1, 1, 1)
- else:
- mask = torch.ones_like(para, device=scales[0].device) * (scales[2] ** 2).view(-1, 1, 1, 1)
- mask[:, :, 1:2, 1:2] += torch.ones(para.shape[0], para.shape[1], 1, 1, device=scales[0].device) * (scales[1] ** 2).view(-1, 1, 1, 1)
- ids = np.arange(para.shape[1])
- assert para.shape[1] == para.shape[0]
- mask[ids, ids, 1:2, 1:2] += 1.0
- if cpu_mode:
- self.grad_mask_map[para] = mask
- else:
- self.grad_mask_map[para] = mask.cuda()
-
- def __setstate__(self, state):
- super(SGD, self).__setstate__(state)
- for group in self.param_groups:
- group.setdefault('nesterov', False)
-
- def step(self, closure=None):
- loss = None
- if closure is not None:
- loss = closure()
-
- for group in self.param_groups:
- weight_decay = group['weight_decay']
- momentum = group['momentum']
- dampening = group['dampening']
- nesterov = group['nesterov']
-
- for p in group['params']:
- if p.grad is None:
- continue
-
- if p in self.grad_mask_map:
- d_p = p.grad.data * self.grad_mask_map[p] # Note: multiply the mask here
- else:
- d_p = p.grad.data
-
- if weight_decay != 0:
- d_p.add_(p.data, alpha=weight_decay)
- if momentum != 0:
- param_state = self.state[p]
- if 'momentum_buffer' not in param_state:
- buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
- else:
- buf = param_state['momentum_buffer']
- buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
- if nesterov:
- d_p = d_p.add(buf, alpha=momentum)
- else:
- d_p = buf
-
- p.data.add_(d_p, alpha=-group['lr'])
-
- return loss
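The gradient masks encode the CSLA equivalence: stepping the single fused 3x3 kernel with these masked gradients matches stepping the scaled 3x3 and 1x1 (and identity) branches separately. Each kernel position is weighted by the squared branch scale, and the 1x1/identity contributions land only on the kernel center. A shape-only sketch of the two-scale case with made-up scale values:

```python
import torch

out_c, in_c = 4, 4
scales = (torch.full((out_c,), 0.5), torch.full((out_c,), 2.0))  # (scale_1x1, scale_conv)
para = torch.randn(out_c, in_c, 3, 3)  # fused 3x3 kernel

mask = torch.ones_like(para) * (scales[1] ** 2).view(-1, 1, 1, 1)  # s_conv^2 everywhere
mask[:, :, 1:2, 1:2] += (scales[0] ** 2).view(-1, 1, 1, 1)         # + s_1x1^2 at the center
print(mask[0, 0])  # 4.0 off-center, 4.25 at the center position
```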
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/checkpoint.py b/cv/detection/yolov6/pytorch/yolov6/utils/checkpoint.py
deleted file mode 100644
index c2f6239b6446c8fc19ec6ece438e06cbf22badeb..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/checkpoint.py
+++ /dev/null
@@ -1,61 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import shutil
-import torch
-import os.path as osp
-from yolov6.utils.events import LOGGER
-from yolov6.utils.torch_utils import fuse_model
-
-
-def load_state_dict(weights, model, map_location=None):
- """Load weights from checkpoint file, only assign weights those layers' name and shape are match."""
- ckpt = torch.load(weights, map_location=map_location)
- state_dict = ckpt['model'].float().state_dict()
- model_state_dict = model.state_dict()
- state_dict = {k: v for k, v in state_dict.items() if k in model_state_dict and v.shape == model_state_dict[k].shape}
- model.load_state_dict(state_dict, strict=False)
- del ckpt, state_dict, model_state_dict
- return model
-
-
-def load_checkpoint(weights, map_location=None, inplace=True, fuse=True):
- """Load model from checkpoint file."""
- LOGGER.info("Loading checkpoint from {}".format(weights))
- ckpt = torch.load(weights, map_location=map_location) # load
- model = ckpt['ema' if ckpt.get('ema') else 'model'].float()
- if fuse:
- LOGGER.info("\nFusing model...")
- model = fuse_model(model).eval()
- else:
- model = model.eval()
- return model
-
-
-def save_checkpoint(ckpt, is_best, save_dir, model_name=""):
- """ Save checkpoint to the disk."""
- if not osp.exists(save_dir):
- os.makedirs(save_dir)
- filename = osp.join(save_dir, model_name + '.pt')
- torch.save(ckpt, filename)
- if is_best:
- best_filename = osp.join(save_dir, 'best_ckpt.pt')
- shutil.copyfile(filename, best_filename)
-
-
-def strip_optimizer(ckpt_dir, epoch):
- """Delete optimizer from saved checkpoint file"""
- for s in ['best', 'last']:
- ckpt_path = osp.join(ckpt_dir, '{}_ckpt.pt'.format(s))
- if not osp.exists(ckpt_path):
- continue
- ckpt = torch.load(ckpt_path, map_location=torch.device('cpu'))
- if ckpt.get('ema'):
- ckpt['model'] = ckpt['ema'] # replace model with ema
- for k in ['optimizer', 'ema', 'updates']: # keys
- ckpt[k] = None
- ckpt['epoch'] = epoch
- ckpt['model'].half() # to FP16
- for p in ckpt['model'].parameters():
- p.requires_grad = False
- torch.save(ckpt, ckpt_path)
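A round trip of the conventions above with a dummy checkpoint dict (keys and values are illustrative; `weights_only=False` assumes torch >= 1.13): `save_checkpoint` writes `<model_name>.pt`, and `strip_optimizer` later nulls the training-only keys and halves the weights:

```python
import os.path as osp
import tempfile

import torch

ckpt = {'model': torch.nn.Linear(2, 2), 'ema': None,
        'optimizer': {'state': {}}, 'updates': 100, 'epoch': 49}

with tempfile.TemporaryDirectory() as save_dir:
    filename = osp.join(save_dir, 'last_ckpt.pt')
    torch.save(ckpt, filename)                       # what save_checkpoint does
    loaded = torch.load(filename, map_location='cpu', weights_only=False)
    for k in ['optimizer', 'ema', 'updates']:        # what strip_optimizer nulls
        loaded[k] = None
    loaded['model'].half()                           # weights to FP16
    torch.save(loaded, filename)
```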
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/config.py b/cv/detection/yolov6/pytorch/yolov6/utils/config.py
deleted file mode 100644
index 7f9c13a3085e0738a3547fc35c5106defed4c489..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/config.py
+++ /dev/null
@@ -1,101 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-# The code is based on
-# https://github.com/open-mmlab/mmcv/blob/master/mmcv/utils/config.py
-# Copyright (c) OpenMMLab.
-
-import os.path as osp
-import shutil
-import sys
-import tempfile
-from importlib import import_module
-from addict import Dict
-
-
-class ConfigDict(Dict):
-
- def __missing__(self, name):
- raise KeyError(name)
-
- def __getattr__(self, name):
- try:
- value = super(ConfigDict, self).__getattr__(name)
- except KeyError:
- ex = AttributeError("'{}' object has no attribute '{}'".format(
- self.__class__.__name__, name))
- except Exception as e:
- ex = e
- else:
- return value
- raise ex
-
-
-class Config(object):
-
- @staticmethod
- def _file2dict(filename):
- filename = str(filename)
- if filename.endswith('.py'):
- with tempfile.TemporaryDirectory() as temp_config_dir:
- shutil.copyfile(filename,
- osp.join(temp_config_dir, '_tempconfig.py'))
- sys.path.insert(0, temp_config_dir)
- mod = import_module('_tempconfig')
- sys.path.pop(0)
- cfg_dict = {
- name: value
- for name, value in mod.__dict__.items()
- if not name.startswith('__')
- }
- # delete imported module
- del sys.modules['_tempconfig']
- else:
- raise IOError('Only .py config files are supported for now!')
- cfg_text = filename + '\n'
- with open(filename, 'r') as f:
- cfg_text += f.read()
-
- return cfg_dict, cfg_text
-
- @staticmethod
- def fromfile(filename):
- cfg_dict, cfg_text = Config._file2dict(filename)
- return Config(cfg_dict, cfg_text=cfg_text, filename=filename)
-
- def __init__(self, cfg_dict=None, cfg_text=None, filename=None):
- if cfg_dict is None:
- cfg_dict = dict()
- elif not isinstance(cfg_dict, dict):
- raise TypeError('cfg_dict must be a dict, but got {}'.format(
- type(cfg_dict)))
-
- super(Config, self).__setattr__('_cfg_dict', ConfigDict(cfg_dict))
- super(Config, self).__setattr__('_filename', filename)
- if cfg_text:
- text = cfg_text
- elif filename:
- with open(filename, 'r') as f:
- text = f.read()
- else:
- text = ''
- super(Config, self).__setattr__('_text', text)
-
- @property
- def filename(self):
- return self._filename
-
- @property
- def text(self):
- return self._text
-
- def __repr__(self):
- return 'Config (path: {}): {}'.format(self.filename,
- self._cfg_dict.__repr__())
-
- def __getattr__(self, name):
- return getattr(self._cfg_dict, name)
-
- def __setattr__(self, name, value):
- if isinstance(value, dict):
- value = ConfigDict(value)
- self._cfg_dict.__setattr__(name, value)
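`Config._file2dict` treats a config as a plain Python module: copy it into a temp directory, import it, and keep the non-dunder globals. The same mechanism in miniature:

```python
import os.path as osp
import sys
import tempfile
from importlib import import_module

with tempfile.TemporaryDirectory() as d:
    with open(osp.join(d, '_tempconfig.py'), 'w') as f:
        f.write("model = dict(type='YOLOv6s')\nsolver = dict(lr0=0.01)\n")
    sys.path.insert(0, d)
    mod = import_module('_tempconfig')
    sys.path.pop(0)
    cfg_dict = {k: v for k, v in mod.__dict__.items() if not k.startswith('__')}
    del sys.modules['_tempconfig']

print(cfg_dict['solver']['lr0'])  # 0.01
```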
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/ema.py b/cv/detection/yolov6/pytorch/yolov6/utils/ema.py
deleted file mode 100644
index de4304f5e05da4a11b2972586beac1ffc07376c8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/ema.py
+++ /dev/null
@@ -1,59 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# The code is based on
-# https://github.com/ultralytics/yolov5/blob/master/utils/torch_utils.py
-import math
-from copy import deepcopy
-import torch
-import torch.nn as nn
-
-
-class ModelEMA:
- """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
- Keep a moving average of everything in the model state_dict (parameters and buffers).
- This is intended to allow functionality like
- https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
- A smoothed version of the weights is necessary for some training schemes to perform well.
- This class is sensitive to where it is initialized in the sequence of model init,
- GPU assignment and distributed training wrappers.
- """
-
- def __init__(self, model, decay=0.9999, updates=0):
- self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
- self.updates = updates
- self.decay = lambda x: decay * (1 - math.exp(-x / 2000))
- for param in self.ema.parameters():
- param.requires_grad_(False)
-
- def update(self, model):
- with torch.no_grad():
- self.updates += 1
- decay = self.decay(self.updates)
-
- state_dict = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
- for k, item in self.ema.state_dict().items():
- if item.dtype.is_floating_point:
- item *= decay
- item += (1 - decay) * state_dict[k].detach()
-
- def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
- copy_attr(self.ema, model, include, exclude)
-
-
-def copy_attr(a, b, include=(), exclude=()):
- """Copy attributes from one instance and set them to another instance."""
- for k, item in b.__dict__.items():
- if (len(include) and k not in include) or k.startswith('_') or k in exclude:
- continue
- else:
- setattr(a, k, item)
-
-
-def is_parallel(model):
- '''Return True if model's type is DP or DDP, else False.'''
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
-
-
-def de_parallel(model):
- '''De-parallelize a model. Return single-GPU model if model's type is DP or DDP.'''
- return model.module if is_parallel(model) else model
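The decay lambda warms the EMA up: early updates use a near-zero decay so the average can follow the rapidly changing young weights, and only after many updates does it approach the nominal 0.9999. Approximate values of the ramp:

```python
import math

decay = lambda x: 0.9999 * (1 - math.exp(-x / 2000))

for updates in (1, 100, 2000, 10000):
    print(updates, round(decay(updates), 4))
# 1     0.0005
# 100   0.0488
# 2000  0.6321
# 10000 0.9932
```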
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/envs.py b/cv/detection/yolov6/pytorch/yolov6/utils/envs.py
deleted file mode 100644
index 10159a9484ed525ad5ef3826ec3db4bf70b4c9cc..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/envs.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-import os
-import random
-import numpy as np
-
-import torch
-import torch.backends.cudnn as cudnn
-from yolov6.utils.events import LOGGER
-
-
-def get_envs():
- """Get PyTorch needed environments from system envirionments."""
- local_rank = int(os.getenv('LOCAL_RANK', -1))
- rank = int(os.getenv('RANK', -1))
- world_size = int(os.getenv('WORLD_SIZE', 1))
- return local_rank, rank, world_size
-
-
-def select_device(device):
- """Set devices' information to the program.
- Args:
- device: a string, like 'cpu' or '1,2,3,4'
- Returns:
- torch.device
- """
- if device == 'cpu':
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
- LOGGER.info('Using CPU for training... ')
- elif device:
- os.environ['CUDA_VISIBLE_DEVICES'] = device
- assert torch.cuda.is_available()
- nd = len(device.strip().split(','))
- LOGGER.info(f'Using {nd} GPU(s) for training... ')
- cuda = device != 'cpu' and torch.cuda.is_available()
- device = torch.device('cuda:0' if cuda else 'cpu')
- return device
-
-
-def set_random_seed(seed, deterministic=False):
- """ Set random state to random libray, numpy, torch and cudnn.
- Args:
- seed: int value.
- deterministic: bool value.
- """
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- if deterministic:
- cudnn.deterministic = True
- cudnn.benchmark = False
- else:
- cudnn.deterministic = False
- cudnn.benchmark = True
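`select_device` works by narrowing `CUDA_VISIBLE_DEVICES` before CUDA is initialized, then always returning `cuda:0`, i.e. the first visible device. The effect in miniature (only meaningful if run before any CUDA call):

```python
import os

import torch

os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'   # what select_device('2,3') sets
nd = len('2,3'.strip().split(','))           # 2 GPUs visible to this process
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(nd, device)                            # 'cuda:0' now maps to physical GPU 2
```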
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/events.py b/cv/detection/yolov6/pytorch/yolov6/utils/events.py
deleted file mode 100644
index bbc007afcd623c912376046e1973245be4dc8295..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/events.py
+++ /dev/null
@@ -1,55 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-import os
-import yaml
-import logging
-import shutil
-
-
-def set_logging(name=None):
- rank = int(os.getenv('RANK', -1))
- logging.basicConfig(format="%(message)s", level=logging.INFO if (rank in (-1, 0)) else logging.WARNING)
- return logging.getLogger(name)
-
-
-LOGGER = set_logging(__name__)
-NCOLS = min(100, shutil.get_terminal_size().columns)
-
-
-def load_yaml(file_path):
- """Load data from yaml file."""
- if isinstance(file_path, str):
- with open(file_path, errors='ignore') as f:
- data_dict = yaml.safe_load(f)
- return data_dict
-
-
-def save_yaml(data_dict, save_path):
- """Save data to yaml file"""
- with open(save_path, 'w') as f:
- yaml.safe_dump(data_dict, f, sort_keys=False)
-
-
-def write_tblog(tblogger, epoch, results, lrs, losses):
- """Display mAP and loss information to log."""
- tblogger.add_scalar("val/mAP@0.5", results[0], epoch + 1)
- tblogger.add_scalar("val/mAP@0.50:0.95", results[1], epoch + 1)
-
- tblogger.add_scalar("train/iou_loss", losses[0], epoch + 1)
- tblogger.add_scalar("train/dist_focalloss", losses[1], epoch + 1)
- tblogger.add_scalar("train/cls_loss", losses[2], epoch + 1)
-
- tblogger.add_scalar("x/lr0", lrs[0], epoch + 1)
- tblogger.add_scalar("x/lr1", lrs[1], epoch + 1)
- tblogger.add_scalar("x/lr2", lrs[2], epoch + 1)
-
-
-def write_tbimg(tblogger, imgs, step, type='train'):
- """Display train_batch and validation predictions to tensorboard."""
- if type == 'train':
- tblogger.add_image('train_batch', imgs, step + 1, dataformats='HWC')
- elif type == 'val':
- for idx, img in enumerate(imgs):
- tblogger.add_image(f'val_img_{idx + 1}', img, step + 1, dataformats='HWC')
- else:
- LOGGER.warning('WARNING: Unknown image type to visualize.\n')
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/figure_iou.py b/cv/detection/yolov6/pytorch/yolov6/utils/figure_iou.py
deleted file mode 100644
index 23248ce1749cfbb59190fcd63e27b03113fb6745..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/figure_iou.py
+++ /dev/null
@@ -1,127 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import math
-import torch
-
-
-class IOUloss:
- """ Calculate IoU loss.
- """
- def __init__(self, box_format='xywh', iou_type='ciou', reduction='none', eps=1e-7):
- """ Setting of the class.
- Args:
- box_format: (string), must be one of 'xywh' or 'xyxy'.
- iou_type: (string), can be one of 'ciou', 'diou', 'giou' or 'siou'
- reduction: (string), specifies the reduction to apply to the output, must be one of 'none', 'mean','sum'.
- eps: (float), a value to avoid divide by zero error.
- """
- self.box_format = box_format
- self.iou_type = iou_type.lower()
- self.reduction = reduction
- self.eps = eps
-
- def __call__(self, box1, box2):
- """ calculate iou. box1 and box2 are torch tensor with shape [M, 4] and [Nm 4].
- """
- if box1.shape[0] != box2.shape[0]:
- box2 = box2.T
- if self.box_format == 'xyxy':
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- elif self.box_format == 'xywh':
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
- else:
- if self.box_format == 'xyxy':
- b1_x1, b1_y1, b1_x2, b1_y2 = torch.split(box1, 1, dim=-1)
- b2_x1, b2_y1, b2_x2, b2_y2 = torch.split(box2, 1, dim=-1)
-
- elif self.box_format == 'xywh':
- b1_x1, b1_y1, b1_w, b1_h = torch.split(box1, 1, dim=-1)
- b2_x1, b2_y1, b2_w, b2_h = torch.split(box2, 1, dim=-1)
- b1_x1, b1_x2 = b1_x1 - b1_w / 2, b1_x1 + b1_w / 2
- b1_y1, b1_y2 = b1_y1 - b1_h / 2, b1_y1 + b1_h / 2
- b2_x1, b2_x2 = b2_x1 - b2_w / 2, b2_x1 + b2_w / 2
- b2_y1, b2_y2 = b2_y1 - b2_h / 2, b2_y1 + b2_h / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + self.eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + self.eps
- union = w1 * h1 + w2 * h2 - inter + self.eps
- iou = inter / union
-
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if self.iou_type == 'giou':
- c_area = cw * ch + self.eps # convex area
- iou = iou - (c_area - union) / c_area
- elif self.iou_type in ['diou', 'ciou']:
- c2 = cw ** 2 + ch ** 2 + self.eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if self.iou_type == 'diou':
- iou = iou - rho2 / c2
- elif self.iou_type == 'ciou':
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + self.eps))
- iou = iou - (rho2 / c2 + v * alpha)
- elif self.iou_type == 'siou':
- # SIoU Loss https://arxiv.org/pdf/2205.12740.pdf
- s_cw = (b2_x1 + b2_x2 - b1_x1 - b1_x2) * 0.5 + self.eps
- s_ch = (b2_y1 + b2_y2 - b1_y1 - b1_y2) * 0.5 + self.eps
- sigma = torch.pow(s_cw ** 2 + s_ch ** 2, 0.5)
- sin_alpha_1 = torch.abs(s_cw) / sigma
- sin_alpha_2 = torch.abs(s_ch) / sigma
- threshold = pow(2, 0.5) / 2
- sin_alpha = torch.where(sin_alpha_1 > threshold, sin_alpha_2, sin_alpha_1)
- angle_cost = torch.cos(torch.arcsin(sin_alpha) * 2 - math.pi / 2)
- rho_x = (s_cw / cw) ** 2
- rho_y = (s_ch / ch) ** 2
- gamma = angle_cost - 2
- distance_cost = 2 - torch.exp(gamma * rho_x) - torch.exp(gamma * rho_y)
- omiga_w = torch.abs(w1 - w2) / torch.max(w1, w2)
- omiga_h = torch.abs(h1 - h2) / torch.max(h1, h2)
- shape_cost = torch.pow(1 - torch.exp(-1 * omiga_w), 4) + torch.pow(1 - torch.exp(-1 * omiga_h), 4)
- iou = iou - 0.5 * (distance_cost + shape_cost)
- loss = 1.0 - iou
-
- if self.reduction == 'sum':
- loss = loss.sum()
- elif self.reduction == 'mean':
- loss = loss.mean()
-
- return loss
-
-
-def pairwise_bbox_iou(box1, box2, box_format='xywh'):
- """Calculate iou.
- This code is based on https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/utils/boxes.py
- """
- if box_format == 'xyxy':
- lt = torch.max(box1[:, None, :2], box2[:, :2])
- rb = torch.min(box1[:, None, 2:], box2[:, 2:])
- area_1 = torch.prod(box1[:, 2:] - box1[:, :2], 1)
- area_2 = torch.prod(box2[:, 2:] - box2[:, :2], 1)
-
- elif box_format == 'xywh':
- lt = torch.max(
- (box1[:, None, :2] - box1[:, None, 2:] / 2),
- (box2[:, :2] - box2[:, 2:] / 2),
- )
- rb = torch.min(
- (box1[:, None, :2] + box1[:, None, 2:] / 2),
- (box2[:, :2] + box2[:, 2:] / 2),
- )
-
- area_1 = torch.prod(box1[:, 2:], 1)
- area_2 = torch.prod(box2[:, 2:], 1)
- valid = (lt < rb).type(lt.type()).prod(dim=2)
- inter = torch.prod(rb - lt, 2) * valid
- return inter / (area_1[:, None] + area_2 - inter)
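A worked xyxy example for the plain-IoU and GIoU branches above. Boxes [0, 0, 2, 2] and [1, 1, 3, 3] intersect in a unit square, so IoU = 1/7; their enclosing box has area 9, so GIoU = IoU - (9 - 7)/9, which is approximately -0.079:

```python
import torch

b1 = torch.tensor([[0., 0., 2., 2.]])
b2 = torch.tensor([[1., 1., 3., 3.]])

lt = torch.max(b1[:, :2], b2[:, :2])
rb = torch.min(b1[:, 2:], b2[:, 2:])
inter = (rb - lt).clamp(0).prod(1)            # 1.0
area1 = (b1[:, 2:] - b1[:, :2]).prod(1)       # 4.0
area2 = (b2[:, 2:] - b2[:, :2]).prod(1)       # 4.0
union = area1 + area2 - inter                 # 7.0
iou = inter / union
print(iou)                                    # tensor([0.1429])

c_area = (torch.max(b1[:, 2:], b2[:, 2:]) - torch.min(b1[:, :2], b2[:, :2])).prod(1)
print(iou - (c_area - union) / c_area)        # tensor([-0.0794])
```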
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/general.py b/cv/detection/yolov6/pytorch/yolov6/utils/general.py
deleted file mode 100644
index cb4418cde08654efc5e2a61fbf6ccca5f76b1b1b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/general.py
+++ /dev/null
@@ -1,127 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import os
-import glob
-import math
-import torch
-import requests
-import pkg_resources as pkg
-from pathlib import Path
-from yolov6.utils.events import LOGGER
-
-def increment_name(path):
- '''increase save directory's id'''
- path = Path(path)
- sep = ''
- if path.exists():
- path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '')
- for n in range(1, 9999):
- p = f'{path}{sep}{n}{suffix}'
- if not os.path.exists(p):
- break
- path = Path(p)
- return path
-
-
-def find_latest_checkpoint(search_dir='.'):
- '''Find the most recent saved checkpoint in search_dir.'''
- checkpoint_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(checkpoint_list, key=os.path.getctime) if checkpoint_list else ''
-
-
-def dist2bbox(distance, anchor_points, box_format='xyxy'):
- '''Transform distance(ltrb) to box(xywh or xyxy).'''
- lt, rb = torch.split(distance, 2, -1)
- x1y1 = anchor_points - lt
- x2y2 = anchor_points + rb
- if box_format == 'xyxy':
- bbox = torch.cat([x1y1, x2y2], -1)
- elif box_format == 'xywh':
- c_xy = (x1y1 + x2y2) / 2
- wh = x2y2 - x1y1
- bbox = torch.cat([c_xy, wh], -1)
- return bbox
-
-
-def bbox2dist(anchor_points, bbox, reg_max):
- '''Transform bbox(xyxy) to dist(ltrb).'''
- x1y1, x2y2 = torch.split(bbox, 2, -1)
- lt = anchor_points - x1y1
- rb = x2y2 - anchor_points
- dist = torch.cat([lt, rb], -1).clip(0, reg_max - 0.01)
- return dist
-
-
-def xywh2xyxy(bboxes):
- '''Transform bbox(xywh) to box(xyxy).'''
- bboxes[..., 0] = bboxes[..., 0] - bboxes[..., 2] * 0.5
- bboxes[..., 1] = bboxes[..., 1] - bboxes[..., 3] * 0.5
- bboxes[..., 2] = bboxes[..., 0] + bboxes[..., 2]
- bboxes[..., 3] = bboxes[..., 1] + bboxes[..., 3]
- return bboxes
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def download_ckpt(path):
- """Download checkpoints of the pretrained models"""
- basename = os.path.basename(path)
- dir = os.path.abspath(os.path.dirname(path))
- os.makedirs(dir, exist_ok=True)
- LOGGER.info(f"checkpoint {basename} not exist, try to downloaded it from github.")
- # need to update the link with every release
- url = f"https://github.com/meituan/YOLOv6/releases/download/0.4.0/{basename}"
- LOGGER.warning(f"downloading url is: {url}, pealse make sure the version of the downloading model is correspoing to the code version!")
- r = requests.get(url, allow_redirects=True)
- assert r.status_code == 200, "Unable to download checkpoint, please download it manually"
- with open(path, 'wb') as f:
- f.write(r.content)
- LOGGER.info(f"checkpoint {basename} downloaded and saved")
-
-
-def make_divisible(x, divisor):
- # Round x up to the nearest multiple of divisor.
- return math.ceil(x / divisor) * divisor
-
-
-def check_img_size(imgsz, s=32, floor=0):
- # Verify image size is a multiple of stride s in each dimension
- if isinstance(imgsz, int): # integer i.e. img_size=640
- new_size = max(make_divisible(imgsz, int(s)), floor)
- else: # list i.e. img_size=[640, 480]
- new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz]
- if new_size != imgsz:
- LOGGER.warning(f'--img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}')
- return new_size
-
-
-def check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False, hard=False, verbose=False):
- # Check whether the installed package version matches the required version.
- current, minimum = (pkg.parse_version(x) for x in (current, minimum))
- result = (current == minimum) if pinned else (current >= minimum) # bool
- if hard:
- info = f'⚠️ {name}{minimum} is required by YOLOv6, but {name}{current} is currently installed'
- assert result, info # assert minimum version requirement
- return result
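A few worked values for the helpers above (a sketch; it assumes LOGGER is configured so the resize warning is visible):

```python
# make_divisible rounds up to the nearest multiple of the divisor.
assert make_divisible(97, 32) == 128  # ceil(97 / 32) * 32

# check_img_size snaps sizes to the model stride and warns when it changes them.
assert check_img_size(640, s=32) == 640                 # already a multiple of 32
assert check_img_size([640, 481], s=32) == [640, 512]   # 481 -> 512, warning logged

# check_version compares parsed versions; hard=True turns a failure into an assert.
assert check_version('1.9.0', minimum='1.7.0') is True
```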
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/metrics.py b/cv/detection/yolov6/pytorch/yolov6/utils/metrics.py
deleted file mode 100644
index cbfa130ef5442fdcc53f17ccf0af82725a3f4047..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/metrics.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Model validation metrics
-# This code is based on
-# https://github.com/ultralytics/yolov5/blob/master/utils/metrics.py
-
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-import warnings
-from . import general
-
-def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='.', names=()):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes = np.unique(target_cls)
- nc = unique_classes.shape[0] # number of classes
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = (target_cls == c).sum() # number of labels
- n_p = i.sum() # number of predictions
-
- if n_p == 0 or n_l == 0:
- continue
- else:
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + 1e-16) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j])
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + 1e-16)
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
-
- # i = f1.mean(0).argmax() # max F1 index
- # return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
- return p, r, ap, f1, unique_classes.astype('int32')
-
-
-def compute_ap(recall, precision):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01]))
- mpre = np.concatenate(([1.], precision, [0.]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
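A toy run of `compute_ap` (values are an assumption, picked so the area is easy to check by hand):

```python
import numpy as np

# A monotone toy curve: recall rises while precision falls.
recall = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
precision = np.array([1.0, 0.9, 0.8, 0.6, 0.5])

ap, mpre, mrec = compute_ap(recall, precision)
# The precision envelope here equals the curve itself, and the trapezoidal
# area under it is 0.2375 + 0.2125 + 0.175 + 0.1375 = 0.7625, so the
# 101-point interpolated AP comes out at roughly 0.76.
print(f"AP = {ap:.4f}")
```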
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
-
-
-def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = py.mean(0)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
-
-def process_batch(detections, labels, iouv):
- """
- Return correct predictions matrix. Both sets of boxes are in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- correct (Array[N, 10]), for 10 IoU levels
- """
- correct = np.zeros((detections.shape[0], iouv.shape[0])).astype(bool)
- iou = general.box_iou(labels[:, 1:], detections[:, :4])
- correct_class = labels[:, 0:1] == detections[:, 5]
- for i in range(len(iouv)):
- x = torch.where((iou >= iouv[i]) & correct_class) # IoU > threshold and classes match
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() # [label, detect, iou]
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- # matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- correct[matches[:, 1].astype(int), i] = True
- return torch.tensor(correct, dtype=torch.bool, device=iouv.device)
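A small sketch of `process_batch` (hypothetical label and detection; it assumes `general.box_iou` is importable exactly as in this module):

```python
import torch

iouv = torch.linspace(0.5, 0.95, 10)  # the 10 COCO IoU thresholds
# One ground-truth label [class, x1, y1, x2, y2] ...
labels = torch.tensor([[0.0, 0.0, 0.0, 10.0, 10.0]])         # class 0, area 100
# ...and one detection [x1, y1, x2, y2, conf, class] overlapping it.
detections = torch.tensor([[0.0, 0.0, 9.0, 8.0, 0.9, 0.0]])  # area 72

correct = process_batch(detections, labels, iouv)
# IoU = 72 / 100 = 0.72 -> True at thresholds 0.50-0.70, False from 0.75 up.
print(correct)  # tensor([[True, True, True, True, True, False, ...]])
```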
-
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
- Update the confusion matrix with one batch of detections and labels.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = general.box_iou(labels[:, 1:], detections[:, :4])
-
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(int)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[detection_classes[m1[j]], gc] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # background FP
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # background FN
-
- def matrix(self):
- return self.matrix
-
- def tp_fp(self):
- tp = self.matrix.diagonal() # true positives
- fp = self.matrix.sum(1) - tp # false positives
- # fn = self.matrix.sum(0) - tp # false negatives (missed detections)
- return tp[:-1], fp[:-1] # remove background class
-
- def plot(self, normalize=True, save_dir='', names=()):
- try:
- import seaborn as sn
-
- array = self.matrix / ((self.matrix.sum(0).reshape(1, -1) + 1E-9) if normalize else 1) # normalize columns
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig = plt.figure(figsize=(12, 9), tight_layout=True)
- nc, nn = self.nc, len(names) # number of classes, names
- sn.set(font_scale=1.0 if nc < 50 else 0.8) # for label size
- labels = (0 < nn < 99) and (nn == nc) # apply names to ticklabels
- with warnings.catch_warnings():
- warnings.simplefilter('ignore') # suppress empty matrix RuntimeWarning: All-NaN slice encountered
- sn.heatmap(array,
- annot=nc < 30,
- annot_kws={
- "size": 8},
- cmap='Blues',
- fmt='.2f',
- square=True,
- vmin=0.0,
- xticklabels=names + ['background FP'] if labels else "auto",
- yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
- fig.axes[0].set_xlabel('True')
- fig.axes[0].set_ylabel('Predicted')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
- plt.close()
- except Exception as e:
- print(f'WARNING: ConfusionMatrix plot failure: {e}')
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/nms.py b/cv/detection/yolov6/pytorch/yolov6/utils/nms.py
deleted file mode 100644
index 0f8126427bdef674ead826a89c325303c59cf580..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/nms.py
+++ /dev/null
@@ -1,105 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# The code is based on
-# https://github.com/ultralytics/yolov5/blob/master/utils/general.py
-
-import os
-import time
-import numpy as np
-import cv2
-import torch
-import torchvision
-
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads
-
-
-def xywh2xyxy(x):
- '''Convert boxes with shape [n, 4] from [x, y, w, h] to [x1, y1, x2, y2] where x1y1 is top-left, x2y2=bottom-right.'''
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, max_det=300):
- """Runs Non-Maximum Suppression (NMS) on inference results.
- This code is borrowed from: https://github.com/ultralytics/yolov5/blob/47233e1698b89fc437a4fb9463c815e9171be955/utils/general.py#L775
- Args:
- prediction: (tensor), with shape [N, 5 + num_classes], N is the number of bboxes.
- conf_thres: (float) confidence threshold.
- iou_thres: (float) iou threshold.
- classes: (None or list[int]), if a list is provided, NMS only keeps the classes you provide.
- agnostic: (bool), when set to True, class-agnostic NMS is performed; otherwise, NMS is done separately per class.
- multi_label: (bool), when set to True, one box can have multiple labels; otherwise, one box can only have one label.
- max_det: (int), max number of output bboxes.
-
- Returns:
- list of detections, each item is one tensor with shape (num_boxes, 6), 6 is for [xyxy, conf, cls].
- """
-
- num_classes = prediction.shape[2] - 5 # number of classes
- pred_candidates = torch.logical_and(prediction[..., 4] > conf_thres, torch.max(prediction[..., 5:], axis=-1)[0] > conf_thres) # candidates
- # Check the parameters.
- assert 0 <= conf_thres <= 1, f'conf_thresh must be in 0.0 to 1.0, however {conf_thres} is provided.'
- assert 0 <= iou_thres <= 1, f'iou_thres must be in 0.0 to 1.0, however {iou_thres} is provided.'
-
- # Function settings.
- max_wh = 4096 # maximum box width and height
- max_nms = 30000 # maximum number of boxes put into torchvision.ops.nms()
- time_limit = 10.0 # quit the function when the NMS time exceeds this limit.
- multi_label &= num_classes > 1 # multiple labels per box
-
- tik = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for img_idx, x in enumerate(prediction): # image index, image inference
- x = x[pred_candidates[img_idx]] # confidence
-
- # If no box remains, skip the remaining steps.
- if not x.shape[0]:
- continue
-
- # multiply class confidence by objectness
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix's shape is (n,6), each row represents (xyxy, conf, cls)
- if multi_label:
- box_idx, class_idx = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[box_idx], x[box_idx, class_idx + 5, None], class_idx[:, None].float()), 1)
- else: # Only keep the class with highest scores.
- conf, class_idx = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, class_idx.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class, only keep boxes whose category is in classes.
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Check shape
- num_box = x.shape[0] # number of boxes
- if not num_box: # no boxes kept.
- continue
- elif num_box > max_nms: # exceeds the maximum number of boxes.
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- class_offset = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + class_offset, x[:, 4] # boxes (offset by class), scores
- keep_box_idx = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if keep_box_idx.shape[0] > max_det: # limit detections
- keep_box_idx = keep_box_idx[:max_det]
-
- output[img_idx] = x[keep_box_idx]
- if (time.time() - tik) > time_limit:
- print(f'WARNING: NMS time exceeded the limit of {time_limit}s.')
- break # time limit exceeded
-
- return output
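A usage sketch for `non_max_suppression` (dummy predictions for a single image with two classes; all values are assumptions):

```python
import torch

# Raw predictions [x, y, w, h, obj, cls0, cls1]: two heavily overlapping
# boxes of class 0 plus one distant box of class 1.
prediction = torch.tensor([[
    [50.0, 50.0, 20.0, 20.0, 0.90, 0.95, 0.05],
    [51.0, 51.0, 20.0, 20.0, 0.80, 0.90, 0.10],   # IoU ≈ 0.82 with the first box
    [200.0, 200.0, 30.0, 30.0, 0.85, 0.10, 0.92],
]])

detections = non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45)
# The second box is suppressed; detections[0] has shape (2, 6) = [xyxy, conf, cls].
print(detections[0])
```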
diff --git a/cv/detection/yolov6/pytorch/yolov6/utils/torch_utils.py b/cv/detection/yolov6/pytorch/yolov6/utils/torch_utils.py
deleted file mode 100644
index 6d2b09cf0f3947427ae7982826a65c45aa1aa473..0000000000000000000000000000000000000000
--- a/cv/detection/yolov6/pytorch/yolov6/utils/torch_utils.py
+++ /dev/null
@@ -1,111 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-
-import time
-from contextlib import contextmanager
-from copy import deepcopy
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-from yolov6.utils.events import LOGGER
-
-try:
- import thop # for FLOPs computation
-except ImportError:
- thop = None
-
-
-@contextmanager
-def torch_distributed_zero_first(local_rank: int):
- """
- Context manager to make all processes in distributed training wait for the local master to do something.
- """
- if local_rank not in [-1, 0]:
- dist.barrier(device_ids=[local_rank])
- yield
- if local_rank == 0:
- dist.barrier(device_ids=[0])
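A typical usage sketch for the barrier helper above (the dataset-preparation function is hypothetical):

```python
import os

def prepare_dataset():
    # Hypothetical stand-in for a download/caching step that should run once.
    return "dataset-ready"

local_rank = int(os.environ.get("LOCAL_RANK", -1))
# Rank 0 runs the body first; other ranks wait at the barrier and then reuse
# the cached result. With local_rank == -1 (single process) both barriers are skipped.
with torch_distributed_zero_first(local_rank):
    dataset = prepare_dataset()
```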
-
-
-def time_sync():
- '''Waits for all kernels in all streams on a CUDA device to complete if cuda is available.'''
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- return time.time()
-
-
-def initialize_weights(model):
- for m in model.modules():
- t = type(m)
- if t is nn.Conv2d:
- pass
- elif t is nn.BatchNorm2d:
- m.eps = 1e-3
- m.momentum = 0.03
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
- m.inplace = True
-
-
-def fuse_conv_and_bn(conv, bn):
- '''Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/.'''
- fusedconv = (
- nn.Conv2d(
- conv.in_channels,
- conv.out_channels,
- kernel_size=conv.kernel_size,
- stride=conv.stride,
- padding=conv.padding,
- groups=conv.groups,
- bias=True,
- )
- .requires_grad_(False)
- .to(conv.weight.device)
- )
-
- # prepare filters
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
-
- # prepare spatial bias
- b_conv = (
- torch.zeros(conv.weight.size(0), device=conv.weight.device)
- if conv.bias is None
- else conv.bias
- )
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(
- torch.sqrt(bn.running_var + bn.eps)
- )
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
-
- return fusedconv
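A quick equivalence check for the fusion above (a sketch; BN uses its running statistics, so compare in eval mode):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
bn = nn.BatchNorm2d(8)
bn.eval()  # inference mode, as assumed by the fusion

fused = fuse_conv_and_bn(conv, bn)

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    # The single fused conv should reproduce conv -> bn up to float tolerance.
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```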
-
-
-def fuse_model(model):
- '''Fuse convolution and batchnorm layers of the model.'''
- from yolov6.layers.common import ConvModule
-
- for m in model.modules():
- if type(m) is ConvModule and hasattr(m, "bn"):
- m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
- delattr(m, "bn") # remove batchnorm
- m.forward = m.forward_fuse # update forward
- return model
-
-
-def get_model_info(model, img_size=640):
- """Get model Params and GFlops.
- Code base on https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/utils/model_utils.py
- """
- from thop import profile
- stride = 64 #32
- img = torch.zeros((1, 3, stride, stride), device=next(model.parameters()).device)
-
- flops, params = profile(deepcopy(model), inputs=(img,), verbose=False)
- params /= 1e6
- flops /= 1e9
- img_size = img_size if isinstance(img_size, list) else [img_size, img_size]
- flops *= img_size[0] * img_size[1] / stride / stride * 2 # Gflops
- info = "Params: {:.2f}M, Gflops: {:.2f}".format(params, flops)
- return info
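The rescaling step above can be checked with a small arithmetic sketch (the measured value is hypothetical):

```python
# profile() runs on a (1, 3, 64, 64) input; the result is rescaled to the
# requested image size by the ratio of pixel counts, times 2.
stride = 64
img_size = [640, 640]
measured_gflops = 0.09  # hypothetical thop measurement at 64x64
gflops = measured_gflops * img_size[0] * img_size[1] / stride / stride * 2
print(f"Gflops: {gflops:.2f}")  # 0.09 * 100 * 2 = 18.00
```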
diff --git a/cv/detection/yolov7/pytorch/.gitignore b/cv/detection/yolov7/pytorch/.gitignore
deleted file mode 100644
index d1bbbbe3982b4485ed9d9ab5bf73971325c8d765..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/.gitignore
+++ /dev/null
@@ -1,263 +0,0 @@
-# Repo-specific GitIgnore ----------------------------------------------------------------------------------------------
-*.jpg
-*.jpeg
-*.png
-*.bmp
-*.tif
-*.tiff
-*.heic
-*.JPG
-*.JPEG
-*.PNG
-*.BMP
-*.TIF
-*.TIFF
-*.HEIC
-*.mp4
-*.mov
-*.MOV
-*.avi
-*.data
-*.json
-*.cfg
-!setup.cfg
-!cfg/yolov3*.cfg
-
-storage.googleapis.com
-runs/*
-data/*
-data/images/*
-!data/*.yaml
-!data/hyps
-!data/scripts
-!data/images
-!data/images/zidane.jpg
-!data/images/bus.jpg
-!data/*.sh
-
-results*.csv
-
-# Datasets -------------------------------------------------------------------------------------------------------------
-coco/
-coco128/
-VOC/
-
-coco2017labels-segments.zip
-test2017.zip
-train2017.zip
-val2017.zip
-
-# MATLAB GitIgnore -----------------------------------------------------------------------------------------------------
-*.m~
-*.mat
-!targets*.mat
-
-# Neural Network weights -----------------------------------------------------------------------------------------------
-*.weights
-*.pt
-*.pb
-*.onnx
-*.engine
-*.mlmodel
-*.torchscript
-*.tflite
-*.h5
-*_saved_model/
-*_web_model/
-*_openvino_model/
-darknet53.conv.74
-yolov3-tiny.conv.15
-*.ptl
-*.trt
-
-# GitHub Python GitIgnore ----------------------------------------------------------------------------------------------
-# Byte-compiled / optimized / DLL files
-__pycache__/
-*.py[cod]
-*$py.class
-
-# C extensions
-*.so
-
-# Distribution / packaging
-.Python
-env/
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-wheels/
-*.egg-info/
-/wandb/
-.installed.cfg
-*.egg
-
-
-# PyInstaller
-# Usually these files are written by a python script from a template
-# before PyInstaller builds the exe, so as to inject date/other infos into it.
-*.manifest
-*.spec
-
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-
-# Unit test / coverage reports
-htmlcov/
-.tox/
-.coverage
-.coverage.*
-.cache
-nosetests.xml
-coverage.xml
-*.cover
-.hypothesis/
-
-# Translations
-*.mo
-*.pot
-
-# Django stuff:
-*.log
-local_settings.py
-
-# Flask stuff:
-instance/
-.webassets-cache
-
-# Scrapy stuff:
-.scrapy
-
-# Sphinx documentation
-docs/_build/
-
-# PyBuilder
-target/
-
-# Jupyter Notebook
-.ipynb_checkpoints
-
-# pyenv
-.python-version
-
-# celery beat schedule file
-celerybeat-schedule
-
-# SageMath parsed files
-*.sage.py
-
-# dotenv
-.env
-
-# virtualenv
-.venv*
-venv*/
-ENV*/
-
-# Spyder project settings
-.spyderproject
-.spyproject
-
-# Rope project settings
-.ropeproject
-
-# mkdocs documentation
-/site
-
-# mypy
-.mypy_cache/
-
-
-# https://github.com/github/gitignore/blob/master/Global/macOS.gitignore -----------------------------------------------
-
-# General
-.DS_Store
-.AppleDouble
-.LSOverride
-
-# Icon must end with two \r
-Icon
-Icon?
-
-# Thumbnails
-._*
-
-# Files that might appear in the root of a volume
-.DocumentRevisions-V100
-.fseventsd
-.Spotlight-V100
-.TemporaryItems
-.Trashes
-.VolumeIcon.icns
-.com.apple.timemachine.donotpresent
-
-# Directories potentially created on remote AFP share
-.AppleDB
-.AppleDesktop
-Network Trash Folder
-Temporary Items
-.apdisk
-
-
-# https://github.com/github/gitignore/blob/master/Global/JetBrains.gitignore
-# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
-# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
-
-# User-specific stuff:
-.idea/*
-.idea/**/workspace.xml
-.idea/**/tasks.xml
-.idea/dictionaries
-.html # Bokeh Plots
-.pg # TensorFlow Frozen Graphs
-.avi # videos
-
-# Sensitive or high-churn files:
-.idea/**/dataSources/
-.idea/**/dataSources.ids
-.idea/**/dataSources.local.xml
-.idea/**/sqlDataSources.xml
-.idea/**/dynamic.xml
-.idea/**/uiDesigner.xml
-
-# Gradle:
-.idea/**/gradle.xml
-.idea/**/libraries
-
-# CMake
-cmake-build-debug/
-cmake-build-release/
-
-# Mongo Explorer plugin:
-.idea/**/mongoSettings.xml
-
-## File-based project format:
-*.iws
-
-## Plugin-specific files:
-
-# IntelliJ
-out/
-
-# mpeltonen/sbt-idea plugin
-.idea_modules/
-
-# JIRA plugin
-atlassian-ide-plugin.xml
-
-# Cursive Clojure plugin
-.idea/replstate.xml
-
-# Crashlytics plugin (for Android Studio and IntelliJ)
-com_crashlytics_export_strings.xml
-crashlytics.properties
-crashlytics-build.properties
-fabric.properties
diff --git a/cv/detection/yolov7/pytorch/LICENSE.md b/cv/detection/yolov7/pytorch/LICENSE.md
deleted file mode 100644
index f288702d2fa16d3cdf0035b15a9fcbc552cd88e7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/LICENSE.md
+++ /dev/null
@@ -1,674 +0,0 @@
- GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
- <one line to give the program's name and a brief idea of what it does.>
- Copyright (C) <year> <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
-    along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
-    <program>  Copyright (C) <year>  <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/cv/detection/yolov7/pytorch/README.md b/cv/detection/yolov7/pytorch/README.md
index 63f86e60325637270fbf9ba6dda0aa3d5d58cd5d..0457b7fdc02cae1c91039915e6bdddfa98f24b97 100644
--- a/cv/detection/yolov7/pytorch/README.md
+++ b/cv/detection/yolov7/pytorch/README.md
@@ -1,10 +1,15 @@
# YOLOv7
## Model description
+
Implementation of paper - [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696)
## Step 1: Installing packages
-```
+
+```bash
+# clone yolov7 and install
+git clone https://gitee.com/deep-spark/deepsparkhub-GPL.git
+cd deepsparkhub-GPL/cv/detection/yolov7/pytorch/
pip3 install -r requirements.txt
```
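+
+To confirm the environment is usable before moving on, a quick sanity check along these lines may help (it assumes PyTorch is installed by the requirements):
+
+```bash
+# Print the torch version and whether a CUDA device is visible
+python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
+```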
@@ -34,18 +39,23 @@ coco2017
```
Modify the configuration file (data/coco.yaml):
+
+```bash
+vim data/coco.yaml
+# path: the root of coco data
+# train: the relative path of train images
+# val: the relative path of valid images
```
-$ vim data/coco.yaml
-$ # path: the root of coco data
-$ # train: the relative path of train images
-$ # val: the relative path of valid images
-```
+
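+As a rough sketch, the edited fields in data/coco.yaml might look like the following (the exact values depend on where the dataset lives; the paths below are assumptions):
+
+```bash
+# Inspect the edited config; the expected fields are illustrated in the comments
+head data/coco.yaml
+# path: ./data/coco        # dataset root directory (assumed)
+# train: train2017.txt     # image list for training, relative to path (assumed)
+# val: val2017.txt         # image list for validation, relative to path (assumed)
+```
+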
The train2017.txt and val2017.txt files can be downloaded from:
-```
+
+```bash
wget https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip
```
+
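+The archive unpacks into a coco/ folder holding the label files plus the train2017.txt and val2017.txt lists; a minimal sketch of unpacking it (the target path is an assumption) is:
+
+```bash
+# Unzip next to the images so the lists and labels sit under data/coco
+unzip coco2017labels.zip -d data/
+```
+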
The dataset is organized as follows:
-```
+
+```bash
coco
|- images
|- train2017
@@ -58,16 +68,19 @@ The datasets format as follows:
```
-## Training
+## Step 3: Training
-Train the yolov5 model as follows, the train log is saved in ./runs/train/exp
+Train the YOLOv7 model as follows; the training log is saved in ./runs/train/exp.
### Single GPU training
-```
+
+```bash
python3 train.py --workers 8 --device 0 --batch-size 32 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml
```
+
### Multiple GPU training
-```
+
+```bash
python3 -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.py --workers 8 --device 0,1,2,3 --sync-bn --batch-size 64 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml
```
@@ -75,26 +88,30 @@ python3 -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.
[`yolov7_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7_training.pt) [`yolov7x_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x_training.pt) [`yolov7-w6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-w6_training.pt) [`yolov7-e6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6_training.pt) [`yolov7-d6_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-d6_training.pt) [`yolov7-e6e_training.pt`](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-e6e_training.pt)
-```
+```bash
python3 train.py --workers 8 --device 0 --batch-size 32 --data data/custom.yaml --img 640 640 --cfg cfg/training/yolov7-custom.yaml --weights 'yolov7_training.pt' --name yolov7-custom --hyp data/hyp.scratch.custom.yaml
```
## Inference
On video:
-```
+
+```bash
python3 detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source yourvideo.mp4
```
On image:
-```
+
+```bash
python3 detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
```
+
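+The --source flag should also accept a webcam index, as in related YOLO codebases (an assumption worth verifying against detect.py):
+
+```bash
+# Run detection on webcam 0 instead of a media file
+python3 detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source 0
+```
+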
## Results
+
| Model | Test Size | AP<sup>test</sup> | AP<sub>50</sub><sup>test</sup> |
| :-- | :-: | :-: | :-: |
| [**YOLOv7**](https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt) | 640 | **49.4%** | **68.6%** |
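+
+To reproduce numbers of this kind, the upstream repository documents a COCO evaluation command along the following lines (the flag values are quoted as an assumption from upstream):
+
+```bash
+python3 test.py --data data/coco.yaml --img 640 --batch 32 --conf 0.001 --iou 0.65 --device 0 --weights yolov7.pt --name yolov7_640_val
+```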
-
## Reference
-https://github.com/WongKinYiu/yolov7
+
+- [YOLOv7](https://github.com/WongKinYiu/yolov7)
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/r50-csp.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/r50-csp.yaml
deleted file mode 100644
index 94559f7d0c07e675dd48795ddc819b637f62326c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/r50-csp.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# CSP-ResNet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Stem, [128]], # 0-P1/2
- [-1, 3, ResCSPC, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 2-P3/8
- [-1, 4, ResCSPC, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 4-P3/8
- [-1, 6, ResCSPC, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 6-P3/8
- [-1, 3, ResCSPC, [1024]], # 7
- ]
-
-# CSP-Res-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 8
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [5, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, ResCSPB, [256]], # 13
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [3, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, ResCSPB, [128]], # 18
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, Conv, [256, 3, 2]],
- [[-1, 13], 1, Concat, [1]], # cat
- [-1, 2, ResCSPB, [256]], # 22
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, Conv, [512, 3, 2]],
- [[-1, 8], 1, Concat, [1]], # cat
- [-1, 2, ResCSPB, [512]], # 26
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[19,23,27], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/x50-csp.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/x50-csp.yaml
deleted file mode 100644
index 8de14f81ab9df369bd5ff9ff4805de4861137ed2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/x50-csp.yaml
+++ /dev/null
@@ -1,49 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# CSP-ResNeXt backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Stem, [128]], # 0-P1/2
- [-1, 3, ResXCSPC, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 2-P3/8
- [-1, 4, ResXCSPC, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 4-P3/8
- [-1, 6, ResXCSPC, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 6-P3/8
- [-1, 3, ResXCSPC, [1024]], # 7
- ]
-
-# CSP-ResX-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 8
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [5, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, ResXCSPB, [256]], # 13
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [3, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, ResXCSPB, [128]], # 18
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, Conv, [256, 3, 2]],
- [[-1, 13], 1, Concat, [1]], # cat
- [-1, 2, ResXCSPB, [256]], # 22
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, Conv, [512, 3, 2]],
- [[-1, 8], 1, Concat, [1]], # cat
- [-1, 2, ResXCSPB, [512]], # 26
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[19,23,27], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-csp-x.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolor-csp-x.yaml
deleted file mode 100644
index 6e234c5c2d71ba6e3ba5061e180682a43c31a7d5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-csp-x.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.33 # model depth multiple
-width_multiple: 1.25 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# CSP-Darknet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 1]], # 0
- [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
- [-1, 1, Bottleneck, [64]],
- [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
- [-1, 2, BottleneckCSPC, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
- [-1, 8, BottleneckCSPC, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
- [-1, 8, BottleneckCSPC, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
- [-1, 4, BottleneckCSPC, [1024]], # 10
- ]
-
-# CSP-Dark-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 11
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [8, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, BottleneckCSPB, [256]], # 16
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [6, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, BottleneckCSPB, [128]], # 21
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, Conv, [256, 3, 2]],
- [[-1, 16], 1, Concat, [1]], # cat
- [-1, 2, BottleneckCSPB, [256]], # 25
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, Conv, [512, 3, 2]],
- [[-1, 11], 1, Concat, [1]], # cat
- [-1, 2, BottleneckCSPB, [512]], # 29
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[22,26,30], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-csp.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolor-csp.yaml
deleted file mode 100644
index 3beecf3ddc10f68fdd5e01f38a1c4cf25b6208b3..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-csp.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# CSP-Darknet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 1]], # 0
- [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
- [-1, 1, Bottleneck, [64]],
- [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
- [-1, 2, BottleneckCSPC, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
- [-1, 8, BottleneckCSPC, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
- [-1, 8, BottleneckCSPC, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
- [-1, 4, BottleneckCSPC, [1024]], # 10
- ]
-
-# CSP-Dark-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 11
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [8, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, BottleneckCSPB, [256]], # 16
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [6, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, BottleneckCSPB, [128]], # 21
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, Conv, [256, 3, 2]],
- [[-1, 16], 1, Concat, [1]], # cat
- [-1, 2, BottleneckCSPB, [256]], # 25
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, Conv, [512, 3, 2]],
- [[-1, 11], 1, Concat, [1]], # cat
- [-1, 2, BottleneckCSPB, [512]], # 29
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[22,26,30], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-d6.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolor-d6.yaml
deleted file mode 100644
index 297b0d1c2424ee532053a568109da371fbbdd18d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-d6.yaml
+++ /dev/null
@@ -1,63 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # expand model depth
-width_multiple: 1.25 # expand layer channels
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# CSP-Darknet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [64, 3, 1]], # 1-P1/2
- [-1, 1, DownC, [128]], # 2-P2/4
- [-1, 3, BottleneckCSPA, [128]],
- [-1, 1, DownC, [256]], # 4-P3/8
- [-1, 15, BottleneckCSPA, [256]],
- [-1, 1, DownC, [512]], # 6-P4/16
- [-1, 15, BottleneckCSPA, [512]],
- [-1, 1, DownC, [768]], # 8-P5/32
- [-1, 7, BottleneckCSPA, [768]],
- [-1, 1, DownC, [1024]], # 10-P6/64
- [-1, 7, BottleneckCSPA, [1024]], # 11
- ]
-
-# CSP-Dark-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 12
- [-1, 1, Conv, [384, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-6, 1, Conv, [384, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [384]], # 17
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-13, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [256]], # 22
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-20, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [128]], # 27
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, DownC, [256]],
- [[-1, 22], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [256]], # 31
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, DownC, [384]],
- [[-1, 17], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [384]], # 35
- [-1, 1, Conv, [768, 3, 1]],
- [-2, 1, DownC, [512]],
- [[-1, 12], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [512]], # 39
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[28,32,36,40], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-e6.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolor-e6.yaml
deleted file mode 100644
index 58afc5ba1a9771d757310dcc1b4f1c185087d642..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-e6.yaml
+++ /dev/null
@@ -1,63 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # expand model depth
-width_multiple: 1.25 # expand layer channels
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# CSP-Darknet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [64, 3, 1]], # 1-P1/2
- [-1, 1, DownC, [128]], # 2-P2/4
- [-1, 3, BottleneckCSPA, [128]],
- [-1, 1, DownC, [256]], # 4-P3/8
- [-1, 7, BottleneckCSPA, [256]],
- [-1, 1, DownC, [512]], # 6-P4/16
- [-1, 7, BottleneckCSPA, [512]],
- [-1, 1, DownC, [768]], # 8-P5/32
- [-1, 3, BottleneckCSPA, [768]],
- [-1, 1, DownC, [1024]], # 10-P6/64
- [-1, 3, BottleneckCSPA, [1024]], # 11
- ]
-
-# CSP-Dark-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 12
- [-1, 1, Conv, [384, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-6, 1, Conv, [384, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [384]], # 17
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-13, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [256]], # 22
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-20, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [128]], # 27
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, DownC, [256]],
- [[-1, 22], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [256]], # 31
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, DownC, [384]],
- [[-1, 17], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [384]], # 35
- [-1, 1, Conv, [768, 3, 1]],
- [-2, 1, DownC, [512]],
- [[-1, 12], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [512]], # 39
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[28,32,36,40], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-p6.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolor-p6.yaml
deleted file mode 100644
index 924cf5cf453d82af810f4d172e6d41b10162fb87..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-p6.yaml
+++ /dev/null
@@ -1,63 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # expand model depth
-width_multiple: 1.0 # expand layer channels
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# CSP-Darknet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [64, 3, 1]], # 1-P1/2
- [-1, 1, Conv, [128, 3, 2]], # 2-P2/4
- [-1, 3, BottleneckCSPA, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 4-P3/8
- [-1, 7, BottleneckCSPA, [256]],
- [-1, 1, Conv, [384, 3, 2]], # 6-P4/16
- [-1, 7, BottleneckCSPA, [384]],
- [-1, 1, Conv, [512, 3, 2]], # 8-P5/32
- [-1, 3, BottleneckCSPA, [512]],
- [-1, 1, Conv, [640, 3, 2]], # 10-P6/64
- [-1, 3, BottleneckCSPA, [640]], # 11
- ]
-
-# CSP-Dark-PAN head
-head:
- [[-1, 1, SPPCSPC, [320]], # 12
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-6, 1, Conv, [256, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [256]], # 17
- [-1, 1, Conv, [192, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-13, 1, Conv, [192, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [192]], # 22
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-20, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [128]], # 27
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, Conv, [192, 3, 2]],
- [[-1, 22], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [192]], # 31
- [-1, 1, Conv, [384, 3, 1]],
- [-2, 1, Conv, [256, 3, 2]],
- [[-1, 17], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [256]], # 35
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, Conv, [320, 3, 2]],
- [[-1, 12], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [320]], # 39
- [-1, 1, Conv, [640, 3, 1]],
-
- [[28,32,36,40], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-w6.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolor-w6.yaml
deleted file mode 100644
index a2fc969693debe81f9f755ec2a29730bf341fcad..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolor-w6.yaml
+++ /dev/null
@@ -1,63 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # expand model depth
-width_multiple: 1.0 # expand layer channels
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# CSP-Darknet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [64, 3, 1]], # 1-P1/2
- [-1, 1, Conv, [128, 3, 2]], # 2-P2/4
- [-1, 3, BottleneckCSPA, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 4-P3/8
- [-1, 7, BottleneckCSPA, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 6-P4/16
- [-1, 7, BottleneckCSPA, [512]],
- [-1, 1, Conv, [768, 3, 2]], # 8-P5/32
- [-1, 3, BottleneckCSPA, [768]],
- [-1, 1, Conv, [1024, 3, 2]], # 10-P6/64
- [-1, 3, BottleneckCSPA, [1024]], # 11
- ]
-
-# CSP-Dark-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 12
- [-1, 1, Conv, [384, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-6, 1, Conv, [384, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [384]], # 17
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-13, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [256]], # 22
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [-20, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 3, BottleneckCSPB, [128]], # 27
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, Conv, [256, 3, 2]],
- [[-1, 22], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [256]], # 31
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, Conv, [384, 3, 2]],
- [[-1, 17], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [384]], # 35
- [-1, 1, Conv, [768, 3, 1]],
- [-2, 1, Conv, [512, 3, 2]],
- [[-1, 12], 1, Concat, [1]], # cat
- [-1, 3, BottleneckCSPB, [512]], # 39
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[28,32,36,40], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolov3-spp.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolov3-spp.yaml
deleted file mode 100644
index 38dcc449f0d0c1b85b4e6ff426da0d9e9df07d4e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolov3-spp.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# darknet53 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 1]], # 0
- [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
- [-1, 1, Bottleneck, [64]],
- [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
- [-1, 2, Bottleneck, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
- [-1, 8, Bottleneck, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
- [-1, 8, Bottleneck, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
- [-1, 4, Bottleneck, [1024]], # 10
- ]
-
-# YOLOv3-SPP head
-head:
- [[-1, 1, Bottleneck, [1024, False]],
- [-1, 1, SPP, [512, [5, 9, 13]]],
- [-1, 1, Conv, [1024, 3, 1]],
- [-1, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
-
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 8], 1, Concat, [1]], # cat backbone P4
- [-1, 1, Bottleneck, [512, False]],
- [-1, 1, Bottleneck, [512, False]],
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
-
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 6], 1, Concat, [1]], # cat backbone P3
- [-1, 1, Bottleneck, [256, False]],
- [-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
-
- [[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolov3.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolov3.yaml
deleted file mode 100644
index f2e76135546945f3ccbb3311c99bf3882a90c199..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolov3.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# darknet53 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 1]], # 0
- [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
- [-1, 1, Bottleneck, [64]],
- [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
- [-1, 2, Bottleneck, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
- [-1, 8, Bottleneck, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
- [-1, 8, Bottleneck, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
- [-1, 4, Bottleneck, [1024]], # 10
- ]
-
-# YOLOv3 head
-head:
- [[-1, 1, Bottleneck, [1024, False]],
- [-1, 1, Conv, [512, [1, 1]]],
- [-1, 1, Conv, [1024, 3, 1]],
- [-1, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
-
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 8], 1, Concat, [1]], # cat backbone P4
- [-1, 1, Bottleneck, [512, False]],
- [-1, 1, Bottleneck, [512, False]],
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
-
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [[-1, 6], 1, Concat, [1]], # cat backbone P3
- [-1, 1, Bottleneck, [256, False]],
- [-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
-
- [[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/baseline/yolov4-csp.yaml b/cv/detection/yolov7/pytorch/cfg/baseline/yolov4-csp.yaml
deleted file mode 100644
index 3c908c7867399b08ec4dd09ea1a5c0219437f2e1..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/baseline/yolov4-csp.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# CSP-Darknet backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 1]], # 0
- [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
- [-1, 1, Bottleneck, [64]],
- [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
- [-1, 2, BottleneckCSPC, [128]],
- [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
- [-1, 8, BottleneckCSPC, [256]],
- [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
- [-1, 8, BottleneckCSPC, [512]],
- [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
- [-1, 4, BottleneckCSPC, [1024]], # 10
- ]
-
-# CSP-Dark-PAN head
-head:
- [[-1, 1, SPPCSPC, [512]], # 11
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [8, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, BottleneckCSPB, [256]], # 16
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [6, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
- [-1, 2, BottleneckCSPB, [128]], # 21
- [-1, 1, Conv, [256, 3, 1]],
- [-2, 1, Conv, [256, 3, 2]],
- [[-1, 16], 1, Concat, [1]], # cat
- [-1, 2, BottleneckCSPB, [256]], # 25
- [-1, 1, Conv, [512, 3, 1]],
- [-2, 1, Conv, [512, 3, 2]],
- [[-1, 11], 1, Concat, [1]], # cat
- [-1, 2, BottleneckCSPB, [512]], # 29
- [-1, 1, Conv, [1024, 3, 1]],
-
- [[22,26,30], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-d6.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-d6.yaml
deleted file mode 100644
index 75a8cf58bd6aebd4a66594e231c0266431a102c7..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-d6.yaml
+++ /dev/null
@@ -1,202 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7-d6 backbone
-backbone:
- # [from, number, module, args],
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [96, 3, 1]], # 1-P1/2
-
- [-1, 1, DownC, [192]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [192, 1, 1]], # 14
-
- [-1, 1, DownC, [384]], # 15-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 27
-
- [-1, 1, DownC, [768]], # 28-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [768, 1, 1]], # 40
-
- [-1, 1, DownC, [1152]], # 41-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [1152, 1, 1]], # 53
-
- [-1, 1, DownC, [1536]], # 54-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [1536, 1, 1]], # 66
- ]
-
-# yolov7-d6 head
-head:
- [[-1, 1, SPPCSPC, [768]], # 67
-
- [-1, 1, Conv, [576, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [53, 1, Conv, [576, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [576, 1, 1]], # 83
-
- [-1, 1, Conv, [384, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [40, 1, Conv, [384, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 99
-
- [-1, 1, Conv, [192, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [27, 1, Conv, [192, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [192, 1, 1]], # 115
-
- [-1, 1, DownC, [384]],
- [[-1, 99], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 129
-
- [-1, 1, DownC, [576]],
- [[-1, 83], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [576, 1, 1]], # 143
-
- [-1, 1, DownC, [768]],
- [[-1, 67], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [768, 1, 1]], # 157
-
- [115, 1, Conv, [384, 3, 1]],
- [129, 1, Conv, [768, 3, 1]],
- [143, 1, Conv, [1152, 3, 1]],
- [157, 1, Conv, [1536, 3, 1]],
-
- [[158,159,160,161], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-e6.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-e6.yaml
deleted file mode 100644
index e6804069510071d179708c38b0976aa2c669be2a..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-e6.yaml
+++ /dev/null
@@ -1,180 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7-e6 backbone
-backbone:
- # [from, number, module, args],
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [80, 3, 1]], # 1-P1/2
-
- [-1, 1, DownC, [160]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 12
-
- [-1, 1, DownC, [320]], # 13-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 23
-
- [-1, 1, DownC, [640]], # 24-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 34
-
- [-1, 1, DownC, [960]], # 35-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [960, 1, 1]], # 45
-
- [-1, 1, DownC, [1280]], # 46-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 56
- ]
-
-# yolov7-e6 head
-head:
- [[-1, 1, SPPCSPC, [640]], # 57
-
- [-1, 1, Conv, [480, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [45, 1, Conv, [480, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 71
-
- [-1, 1, Conv, [320, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [34, 1, Conv, [320, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 85
-
- [-1, 1, Conv, [160, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [23, 1, Conv, [160, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 99
-
- [-1, 1, DownC, [320]],
- [[-1, 85], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 111
-
- [-1, 1, DownC, [480]],
- [[-1, 71], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 123
-
- [-1, 1, DownC, [640]],
- [[-1, 57], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 135
-
- [99, 1, Conv, [320, 3, 1]],
- [111, 1, Conv, [640, 3, 1]],
- [123, 1, Conv, [960, 3, 1]],
- [135, 1, Conv, [1280, 3, 1]],
-
- [[136,137,138,139], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-e6e.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-e6e.yaml
deleted file mode 100644
index 135990d8602eb02d1f9f7bfcda27c4ca4f7bd15b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-e6e.yaml
+++ /dev/null
@@ -1,301 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7-e6e backbone
-backbone:
- # [from, number, module, args],
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [80, 3, 1]], # 1-P1/2
-
- [-1, 1, DownC, [160]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 12
- [-11, 1, Conv, [64, 1, 1]],
- [-12, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 22
- [[-1, -11], 1, Shortcut, [1]], # 23
-
- [-1, 1, DownC, [320]], # 24-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 34
- [-11, 1, Conv, [128, 1, 1]],
- [-12, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 44
- [[-1, -11], 1, Shortcut, [1]], # 45
-
- [-1, 1, DownC, [640]], # 46-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 56
- [-11, 1, Conv, [256, 1, 1]],
- [-12, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 66
- [[-1, -11], 1, Shortcut, [1]], # 67
-
- [-1, 1, DownC, [960]], # 68-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [960, 1, 1]], # 78
- [-11, 1, Conv, [384, 1, 1]],
- [-12, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [960, 1, 1]], # 88
- [[-1, -11], 1, Shortcut, [1]], # 89
-
- [-1, 1, DownC, [1280]], # 90-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 100
- [-11, 1, Conv, [512, 1, 1]],
- [-12, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 110
- [[-1, -11], 1, Shortcut, [1]], # 111
- ]
-
-# yolov7-e6e head
-head:
- [[-1, 1, SPPCSPC, [640]], # 112
-
- [-1, 1, Conv, [480, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [89, 1, Conv, [480, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 126
- [-11, 1, Conv, [384, 1, 1]],
- [-12, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 136
- [[-1, -11], 1, Shortcut, [1]], # 137
-
- [-1, 1, Conv, [320, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [67, 1, Conv, [320, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 151
- [-11, 1, Conv, [256, 1, 1]],
- [-12, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 161
- [[-1, -11], 1, Shortcut, [1]], # 162
-
- [-1, 1, Conv, [160, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [45, 1, Conv, [160, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 176
- [-11, 1, Conv, [128, 1, 1]],
- [-12, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 186
- [[-1, -11], 1, Shortcut, [1]], # 187
-
- [-1, 1, DownC, [320]],
- [[-1, 162], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 199
- [-11, 1, Conv, [256, 1, 1]],
- [-12, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 209
- [[-1, -11], 1, Shortcut, [1]], # 210
-
- [-1, 1, DownC, [480]],
- [[-1, 137], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 222
- [-11, 1, Conv, [384, 1, 1]],
- [-12, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 232
- [[-1, -11], 1, Shortcut, [1]], # 233
-
- [-1, 1, DownC, [640]],
- [[-1, 112], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 245
- [-11, 1, Conv, [512, 1, 1]],
- [-12, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 255
- [[-1, -11], 1, Shortcut, [1]], # 256
-
- [187, 1, Conv, [320, 3, 1]],
- [210, 1, Conv, [640, 3, 1]],
- [233, 1, Conv, [960, 3, 1]],
- [256, 1, Conv, [1280, 3, 1]],
-
- [[257,258,259,260], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-tiny-silu.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-tiny-silu.yaml
deleted file mode 100644
index 9250573acc92b94b08eb304f00790b5bed620830..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-tiny-silu.yaml
+++ /dev/null
@@ -1,112 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# YOLOv7-tiny backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 2]], # 0-P1/2
-
- [-1, 1, Conv, [64, 3, 2]], # 1-P2/4
-
- [-1, 1, Conv, [32, 1, 1]],
- [-2, 1, Conv, [32, 1, 1]],
- [-1, 1, Conv, [32, 3, 1]],
- [-1, 1, Conv, [32, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [64, 1, 1]], # 7
-
- [-1, 1, MP, []], # 8-P3/8
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 14
-
- [-1, 1, MP, []], # 15-P4/16
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 21
-
- [-1, 1, MP, []], # 22-P5/32
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 28
- ]
-
-# YOLOv7-tiny head
-head:
- [[-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, SP, [5]],
- [-2, 1, SP, [9]],
- [-3, 1, SP, [13]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]],
- [[-1, -7], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 37
-
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [21, 1, Conv, [128, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 47
-
- [-1, 1, Conv, [64, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [14, 1, Conv, [64, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [32, 1, 1]],
- [-2, 1, Conv, [32, 1, 1]],
- [-1, 1, Conv, [32, 3, 1]],
- [-1, 1, Conv, [32, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [64, 1, 1]], # 57
-
- [-1, 1, Conv, [128, 3, 2]],
- [[-1, 47], 1, Concat, [1]],
-
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 65
-
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, 37], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 73
-
- [57, 1, Conv, [128, 3, 1]],
- [65, 1, Conv, [256, 3, 1]],
- [73, 1, Conv, [512, 3, 1]],
-
- [[74,75,76], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-tiny.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-tiny.yaml
deleted file mode 100644
index b09f130b811e73172db3084e064c89cf73e2b8a5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-tiny.yaml
+++ /dev/null
@@ -1,112 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# yolov7-tiny backbone
-backbone:
- # [from, number, module, args] c2, k=1, s=1, p=None, g=1, act=True
- [[-1, 1, Conv, [32, 3, 2, None, 1, nn.LeakyReLU(0.1)]], # 0-P1/2
-
- [-1, 1, Conv, [64, 3, 2, None, 1, nn.LeakyReLU(0.1)]], # 1-P2/4
-
- [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 7
-
- [-1, 1, MP, []], # 8-P3/8
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 14
-
- [-1, 1, MP, []], # 15-P4/16
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 21
-
- [-1, 1, MP, []], # 22-P5/32
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 28
- ]
-
-# yolov7-tiny head
-head:
- [[-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, SP, [5]],
- [-2, 1, SP, [9]],
- [-3, 1, SP, [13]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -7], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 37
-
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [21, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 47
-
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [14, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 57
-
- [-1, 1, Conv, [128, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, 47], 1, Concat, [1]],
-
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 65
-
- [-1, 1, Conv, [256, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, 37], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 73
-
- [57, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [65, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [73, 1, Conv, [512, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
-
- [[74,75,76], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-w6.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-w6.yaml
deleted file mode 100644
index 5637a615261bf1aa735ce1385239673c4ae446c8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7-w6.yaml
+++ /dev/null
@@ -1,158 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7-w6 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [64, 3, 1]], # 1-P1/2
-
- [-1, 1, Conv, [128, 3, 2]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 10
-
- [-1, 1, Conv, [256, 3, 2]], # 11-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 19
-
- [-1, 1, Conv, [512, 3, 2]], # 20-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 28
-
- [-1, 1, Conv, [768, 3, 2]], # 29-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [768, 1, 1]], # 37
-
- [-1, 1, Conv, [1024, 3, 2]], # 38-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [1024, 1, 1]], # 46
- ]
-
-# yolov7-w6 head
-head:
- [[-1, 1, SPPCSPC, [512]], # 47
-
- [-1, 1, Conv, [384, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [37, 1, Conv, [384, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 59
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [28, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 71
-
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [19, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 83
-
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, 71], 1, Concat, [1]], # cat
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 93
-
- [-1, 1, Conv, [384, 3, 2]],
- [[-1, 59], 1, Concat, [1]], # cat
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 103
-
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, 47], 1, Concat, [1]], # cat
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 113
-
- [83, 1, Conv, [256, 3, 1]],
- [93, 1, Conv, [512, 3, 1]],
- [103, 1, Conv, [768, 3, 1]],
- [113, 1, Conv, [1024, 3, 1]],
-
- [[114,115,116,117], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
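
The w6 backbone above (like the d6/e6/e6e variants later in this patch) opens with `ReOrg`, a space-to-depth step: every 2x2 block of pixels moves into the channel dimension, which is why layer 1 is already `P1/2` without a strided conv. A minimal, functionally equivalent version (the exact channel ordering in yolov7's `ReOrg` may differ):

```python
# Space-to-depth: 3 x 1280 x 1280 -> 12 x 640 x 640 before the first conv.
import torch

def reorg(x):
    return torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                      x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)

print(reorg(torch.randn(1, 3, 1280, 1280)).shape)   # torch.Size([1, 12, 640, 640])
```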
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7.yaml
deleted file mode 100644
index 201f98da6d3c7e283a905ca8a7593f7f96e7fabc..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7.yaml
+++ /dev/null
@@ -1,140 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# yolov7 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 1]], # 0
-
- [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
- [-1, 1, Conv, [64, 3, 1]],
-
- [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 11
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [128, 1, 1]],
- [-3, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 16-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 24
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [256, 1, 1]],
- [-3, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 29-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [1024, 1, 1]], # 37
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [512, 1, 1]],
- [-3, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 42-P5/32
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [1024, 1, 1]], # 50
- ]
-
-# yolov7 head
-head:
- [[-1, 1, SPPCSPC, [512]], # 51
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [37, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 63
-
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [24, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 75
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [128, 1, 1]],
- [-3, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 2]],
- [[-1, -3, 63], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 88
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [256, 1, 1]],
- [-3, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, -3, 51], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 101
-
- [75, 1, RepConv, [256, 3, 1]],
- [88, 1, RepConv, [512, 3, 1]],
- [101, 1, RepConv, [1024, 3, 1]],
-
- [[102,103,104], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
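
The `RepConv` heads closing this deploy config are re-parameterizable branches: during training they hold a 3x3 conv, a 1x1 conv, and (when input and output widths match) an identity, which collapse into a single 3x3 conv for deployment. The fusion is plain linearity of convolution over kernels; a sketch of the two-branch case, ignoring the BatchNorm folding that yolov7's `RepConv` also performs first:

```python
# RepVGG-style fusion: zero-pad the 1x1 kernel to 3x3 and add the kernels.
import torch
import torch.nn.functional as F

def fuse_rep_branches(w3, b3, w1, b1):
    """w3: [co,ci,3,3], w1: [co,ci,1,1] -> one fused 3x3 kernel + bias."""
    w1_padded = F.pad(w1, [1, 1, 1, 1])            # 1x1 sits at the kernel center
    return w3 + w1_padded, b3 + b1

co, ci = 8, 8
w3, b3 = torch.randn(co, ci, 3, 3), torch.randn(co)
w1, b1 = torch.randn(co, ci, 1, 1), torch.randn(co)
wf, bf = fuse_rep_branches(w3, b3, w1, b1)

x = torch.randn(1, ci, 16, 16)
y_two = F.conv2d(x, w3, b3, padding=1) + F.conv2d(x, w1, b1)
y_one = F.conv2d(x, wf, bf, padding=1)
print(torch.allclose(y_two, y_one, atol=1e-5))     # True: same output, one conv
```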
diff --git a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7x.yaml b/cv/detection/yolov7/pytorch/cfg/deploy/yolov7x.yaml
deleted file mode 100644
index c1b4acce40c08baf9aa311cea1e07b4e856d93a2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/deploy/yolov7x.yaml
+++ /dev/null
@@ -1,156 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# yolov7x backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [40, 3, 1]], # 0
-
- [-1, 1, Conv, [80, 3, 2]], # 1-P1/2
- [-1, 1, Conv, [80, 3, 1]],
-
- [-1, 1, Conv, [160, 3, 2]], # 3-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 13
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [160, 1, 1]],
- [-3, 1, Conv, [160, 1, 1]],
- [-1, 1, Conv, [160, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 18-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 28
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [320, 1, 1]],
- [-3, 1, Conv, [320, 1, 1]],
- [-1, 1, Conv, [320, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 33-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 43
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [640, 1, 1]],
- [-3, 1, Conv, [640, 1, 1]],
- [-1, 1, Conv, [640, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 48-P5/32
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 58
- ]
-
-# yolov7x head
-head:
- [[-1, 1, SPPCSPC, [640]], # 59
-
- [-1, 1, Conv, [320, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [43, 1, Conv, [320, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 73
-
- [-1, 1, Conv, [160, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [28, 1, Conv, [160, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 87
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [160, 1, 1]],
- [-3, 1, Conv, [160, 1, 1]],
- [-1, 1, Conv, [160, 3, 2]],
- [[-1, -3, 73], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 102
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [320, 1, 1]],
- [-3, 1, Conv, [320, 1, 1]],
- [-1, 1, Conv, [320, 3, 2]],
- [[-1, -3, 59], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 117
-
- [87, 1, Conv, [320, 3, 1]],
- [102, 1, Conv, [640, 3, 1]],
- [117, 1, Conv, [1280, 3, 1]],
-
- [[118,119,120], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
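
The `P3/8 ... P5/32` markers scattered through these configs record output strides: each anchor group is expressed in input pixels at that scale. As a quick check of the resulting grid sizes for a 640x640 input:

```python
# Stride arithmetic behind the P3/P4/P5 comments.
img = 640
for level, stride in [("P3", 8), ("P4", 16), ("P5", 32)]:
    cells = img // stride
    print(f"{level}: stride {stride} -> {cells}x{cells} grid")
```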
diff --git a/cv/detection/yolov7/pytorch/cfg/training/yolov7-d6.yaml b/cv/detection/yolov7/pytorch/cfg/training/yolov7-d6.yaml
deleted file mode 100644
index 4faedda4520934a36cf5633a71cffb4227d20834..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/training/yolov7-d6.yaml
+++ /dev/null
@@ -1,207 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7 backbone
-backbone:
- # [from, number, module, args],
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [96, 3, 1]], # 1-P1/2
-
- [-1, 1, DownC, [192]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [192, 1, 1]], # 14
-
- [-1, 1, DownC, [384]], # 15-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 27
-
- [-1, 1, DownC, [768]], # 28-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [768, 1, 1]], # 40
-
- [-1, 1, DownC, [1152]], # 41-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [1152, 1, 1]], # 53
-
- [-1, 1, DownC, [1536]], # 54-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [1536, 1, 1]], # 66
- ]
-
-# yolov7 head
-head:
- [[-1, 1, SPPCSPC, [768]], # 67
-
- [-1, 1, Conv, [576, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [53, 1, Conv, [576, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [576, 1, 1]], # 83
-
- [-1, 1, Conv, [384, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [40, 1, Conv, [384, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 99
-
- [-1, 1, Conv, [192, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [27, 1, Conv, [192, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [192, 1, 1]], # 115
-
- [-1, 1, DownC, [384]],
- [[-1, 99], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 129
-
- [-1, 1, DownC, [576]],
- [[-1, 83], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [576, 1, 1]], # 143
-
- [-1, 1, DownC, [768]],
- [[-1, 67], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8, -9, -10], 1, Concat, [1]],
- [-1, 1, Conv, [768, 1, 1]], # 157
-
- [115, 1, Conv, [384, 3, 1]],
- [129, 1, Conv, [768, 3, 1]],
- [143, 1, Conv, [1152, 3, 1]],
- [157, 1, Conv, [1536, 3, 1]],
-
- [115, 1, Conv, [384, 3, 1]],
- [99, 1, Conv, [768, 3, 1]],
- [83, 1, Conv, [1152, 3, 1]],
- [67, 1, Conv, [1536, 3, 1]],
-
- [[158,159,160,161,162,163,164,165], 1, IAuxDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
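
`IAuxDetect` is what distinguishes these training configs from their deploy twins: the eight inputs (layers 158..165 above) split into four lead heads, which produce the final detections, and four auxiliary heads, which add coarse supervision during training and are dropped at export. A rough sketch of that split, assuming `nl = 4` pyramid levels:

```python
# Lead vs auxiliary head split for IAuxDetect, as wired in this YAML.
nl = 4
head_inputs = list(range(158, 166))               # layers 158..165 from the config
lead, aux = head_inputs[:nl], head_inputs[nl:]
print("lead heads:", lead)                        # kept for inference/deployment
print("aux heads :", aux)                         # training-time supervision only
```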
diff --git a/cv/detection/yolov7/pytorch/cfg/training/yolov7-e6.yaml b/cv/detection/yolov7/pytorch/cfg/training/yolov7-e6.yaml
deleted file mode 100644
index 58b27f097d9e1c0f334bb2522999b2c6b8022e65..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/training/yolov7-e6.yaml
+++ /dev/null
@@ -1,185 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7 backbone
-backbone:
- # [from, number, module, args],
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [80, 3, 1]], # 1-P1/2
-
- [-1, 1, DownC, [160]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 12
-
- [-1, 1, DownC, [320]], # 13-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 23
-
- [-1, 1, DownC, [640]], # 24-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 34
-
- [-1, 1, DownC, [960]], # 35-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [960, 1, 1]], # 45
-
- [-1, 1, DownC, [1280]], # 46-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 56
- ]
-
-# yolov7 head
-head:
- [[-1, 1, SPPCSPC, [640]], # 57
-
- [-1, 1, Conv, [480, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [45, 1, Conv, [480, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 71
-
- [-1, 1, Conv, [320, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [34, 1, Conv, [320, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 85
-
- [-1, 1, Conv, [160, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [23, 1, Conv, [160, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 99
-
- [-1, 1, DownC, [320]],
- [[-1, 85], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 111
-
- [-1, 1, DownC, [480]],
- [[-1, 71], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 123
-
- [-1, 1, DownC, [640]],
- [[-1, 57], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 135
-
- [99, 1, Conv, [320, 3, 1]],
- [111, 1, Conv, [640, 3, 1]],
- [123, 1, Conv, [960, 3, 1]],
- [135, 1, Conv, [1280, 3, 1]],
-
- [99, 1, Conv, [320, 3, 1]],
- [85, 1, Conv, [640, 3, 1]],
- [71, 1, Conv, [960, 3, 1]],
- [57, 1, Conv, [1280, 3, 1]],
-
- [[136,137,138,139,140,141,142,143], 1, IAuxDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
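
Where the P5 models downsample with `MP` + strided-conv pairs, the P6 models above use `DownC`. A sketch of the idea, written from the pattern these configs imply — a learnable strided-conv path and a parameter-free max-pool path, each producing half the target channels, then concatenated; yolov7's actual `DownC` lives in `models/common.py` and may differ in detail:

```python
# DownC-style downsampling: conv path + pooling path, concatenated.
import torch
import torch.nn as nn

class DownCSketch(nn.Module):
    def __init__(self, c1, c2, k=2):
        super().__init__()
        self.cv1 = nn.Conv2d(c1, c1 // 2, 1, 1)
        self.cv2 = nn.Conv2d(c1 // 2, c2 // 2, 3, k, 1)   # strided 3x3: learnable path
        self.cv3 = nn.Conv2d(c1, c2 // 2, 1, 1)           # 1x1 after pooling
        self.mp = nn.MaxPool2d(k, k)                      # parameter-free path

    def forward(self, x):
        return torch.cat((self.cv2(self.cv1(x)), self.cv3(self.mp(x))), dim=1)

x = torch.randn(1, 160, 64, 64)
print(DownCSketch(160, 320)(x).shape)             # torch.Size([1, 320, 32, 32])
```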
diff --git a/cv/detection/yolov7/pytorch/cfg/training/yolov7-e6e.yaml b/cv/detection/yolov7/pytorch/cfg/training/yolov7-e6e.yaml
deleted file mode 100644
index 3c836619e6bd3e9e5585df3696dcb977995c99ff..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/training/yolov7-e6e.yaml
+++ /dev/null
@@ -1,306 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7 backbone
-backbone:
- # [from, number, module, args],
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [80, 3, 1]], # 1-P1/2
-
- [-1, 1, DownC, [160]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 12
- [-11, 1, Conv, [64, 1, 1]],
- [-12, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 22
- [[-1, -11], 1, Shortcut, [1]], # 23
-
- [-1, 1, DownC, [320]], # 24-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 34
- [-11, 1, Conv, [128, 1, 1]],
- [-12, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 44
- [[-1, -11], 1, Shortcut, [1]], # 45
-
- [-1, 1, DownC, [640]], # 46-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 56
- [-11, 1, Conv, [256, 1, 1]],
- [-12, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 66
- [[-1, -11], 1, Shortcut, [1]], # 67
-
- [-1, 1, DownC, [960]], # 68-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [960, 1, 1]], # 78
- [-11, 1, Conv, [384, 1, 1]],
- [-12, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [960, 1, 1]], # 88
- [[-1, -11], 1, Shortcut, [1]], # 89
-
- [-1, 1, DownC, [1280]], # 90-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 100
- [-11, 1, Conv, [512, 1, 1]],
- [-12, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 110
- [[-1, -11], 1, Shortcut, [1]], # 111
- ]
-
-# yolov7 head
-head:
- [[-1, 1, SPPCSPC, [640]], # 112
-
- [-1, 1, Conv, [480, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [89, 1, Conv, [480, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 126
- [-11, 1, Conv, [384, 1, 1]],
- [-12, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 136
- [[-1, -11], 1, Shortcut, [1]], # 137
-
- [-1, 1, Conv, [320, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [67, 1, Conv, [320, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 151
- [-11, 1, Conv, [256, 1, 1]],
- [-12, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 161
- [[-1, -11], 1, Shortcut, [1]], # 162
-
- [-1, 1, Conv, [160, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [45, 1, Conv, [160, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 176
- [-11, 1, Conv, [128, 1, 1]],
- [-12, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 186
- [[-1, -11], 1, Shortcut, [1]], # 187
-
- [-1, 1, DownC, [320]],
- [[-1, 162], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 199
- [-11, 1, Conv, [256, 1, 1]],
- [-12, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 209
- [[-1, -11], 1, Shortcut, [1]], # 210
-
- [-1, 1, DownC, [480]],
- [[-1, 137], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 222
- [-11, 1, Conv, [384, 1, 1]],
- [-12, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [480, 1, 1]], # 232
- [[-1, -11], 1, Shortcut, [1]], # 233
-
- [-1, 1, DownC, [640]],
- [[-1, 112], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 245
- [-11, 1, Conv, [512, 1, 1]],
- [-12, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 255
- [[-1, -11], 1, Shortcut, [1]], # 256
-
- [187, 1, Conv, [320, 3, 1]],
- [210, 1, Conv, [640, 3, 1]],
- [233, 1, Conv, [960, 3, 1]],
- [256, 1, Conv, [1280, 3, 1]],
-
- [186, 1, Conv, [320, 3, 1]],
- [161, 1, Conv, [640, 3, 1]],
- [136, 1, Conv, [960, 3, 1]],
- [112, 1, Conv, [1280, 3, 1]],
-
- [[257,258,259,260,261,262,263,264], 1, IAuxDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
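
The `[[-1, -11], 1, Shortcut, [1]]` rows that only e6e has are what makes it the "extended" variant: each stage runs two parallel ELAN stacks and merges them by element-wise addition. A minimal stand-in (the `[1]` argument is ignored here):

```python
# Shortcut as used in e6e: element-wise sum of two same-shape branches.
import torch
import torch.nn as nn

class ShortcutSketch(nn.Module):
    def forward(self, xs):                        # xs: list of same-shape tensors
        return xs[0] + xs[1]

a, b = torch.randn(1, 160, 80, 80), torch.randn(1, 160, 80, 80)
print(ShortcutSketch()([a, b]).shape)             # torch.Size([1, 160, 80, 80])
```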
diff --git a/cv/detection/yolov7/pytorch/cfg/training/yolov7-tiny.yaml b/cv/detection/yolov7/pytorch/cfg/training/yolov7-tiny.yaml
deleted file mode 100644
index 3679b0d557a27f34b0cf915d494f61b149ade560..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/training/yolov7-tiny.yaml
+++ /dev/null
@@ -1,112 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [10,13, 16,30, 33,23] # P3/8
- - [30,61, 62,45, 59,119] # P4/16
- - [116,90, 156,198, 373,326] # P5/32
-
-# yolov7-tiny backbone
-backbone:
- # [from, number, module, args] c2, k=1, s=1, p=None, g=1, act=True
- [[-1, 1, Conv, [32, 3, 2, None, 1, nn.LeakyReLU(0.1)]], # 0-P1/2
-
- [-1, 1, Conv, [64, 3, 2, None, 1, nn.LeakyReLU(0.1)]], # 1-P2/4
-
- [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 7
-
- [-1, 1, MP, []], # 8-P3/8
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 14
-
- [-1, 1, MP, []], # 15-P4/16
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 21
-
- [-1, 1, MP, []], # 22-P5/32
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 28
- ]
-
-# yolov7-tiny head
-head:
- [[-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, SP, [5]],
- [-2, 1, SP, [9]],
- [-3, 1, SP, [13]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -7], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 37
-
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [21, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 47
-
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [14, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 57
-
- [-1, 1, Conv, [128, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, 47], 1, Concat, [1]],
-
- [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 65
-
- [-1, 1, Conv, [256, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, 37], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [[-1, -2, -3, -4], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]], # 73
-
- [57, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [65, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
- [73, 1, Conv, [512, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
-
- [[74,75,76], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
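
Note the head swap relative to the deploy config: training uses `IDetect`, which wraps each output conv with YOLOR-style implicit tensors — a learned per-channel addition before the conv and a learned per-channel multiplication after it. Both are affine, so they can be folded into the conv's weight and bias at export, reducing `IDetect` to a plain `Detect`. A rough sketch of one such head:

```python
# Implicit add/multiply around a 1x1 output conv, per IDetect's design.
import torch
import torch.nn as nn

class ImplicitHeadSketch(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.ia = nn.Parameter(torch.zeros(1, c_in, 1, 1))   # implicit add
        self.conv = nn.Conv2d(c_in, c_out, 1)
        self.im = nn.Parameter(torch.ones(1, c_out, 1, 1))   # implicit multiply

    def forward(self, x):
        return self.conv(x + self.ia) * self.im

# c_out = (80 classes + 5) * 3 anchors = 255 per pyramid level
print(ImplicitHeadSketch(128, 255)(torch.randn(1, 128, 80, 80)).shape)
```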
diff --git a/cv/detection/yolov7/pytorch/cfg/training/yolov7-w6.yaml b/cv/detection/yolov7/pytorch/cfg/training/yolov7-w6.yaml
deleted file mode 100644
index 4b9c0131a0ec971fd64ad914fa6704ec3185e3e6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/training/yolov7-w6.yaml
+++ /dev/null
@@ -1,163 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64
-
-# yolov7 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, ReOrg, []], # 0
- [-1, 1, Conv, [64, 3, 1]], # 1-P1/2
-
- [-1, 1, Conv, [128, 3, 2]], # 2-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 10
-
- [-1, 1, Conv, [256, 3, 2]], # 11-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 19
-
- [-1, 1, Conv, [512, 3, 2]], # 20-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 28
-
- [-1, 1, Conv, [768, 3, 2]], # 29-P5/32
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [-1, 1, Conv, [384, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [768, 1, 1]], # 37
-
- [-1, 1, Conv, [1024, 3, 2]], # 38-P6/64
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [1024, 1, 1]], # 46
- ]
-
-# yolov7 head
-head:
- [[-1, 1, SPPCSPC, [512]], # 47
-
- [-1, 1, Conv, [384, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [37, 1, Conv, [384, 1, 1]], # route backbone P5
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 59
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [28, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 71
-
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [19, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 83
-
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, 71], 1, Concat, [1]], # cat
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 93
-
- [-1, 1, Conv, [384, 3, 2]],
- [[-1, 59], 1, Concat, [1]], # cat
-
- [-1, 1, Conv, [384, 1, 1]],
- [-2, 1, Conv, [384, 1, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [-1, 1, Conv, [192, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [384, 1, 1]], # 103
-
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, 47], 1, Concat, [1]], # cat
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 113
-
- [83, 1, Conv, [256, 3, 1]],
- [93, 1, Conv, [512, 3, 1]],
- [103, 1, Conv, [768, 3, 1]],
- [113, 1, Conv, [1024, 3, 1]],
-
- [83, 1, Conv, [320, 3, 1]],
- [71, 1, Conv, [640, 3, 1]],
- [59, 1, Conv, [960, 3, 1]],
- [47, 1, Conv, [1280, 3, 1]],
-
- [[114,115,116,117,118,119,120,121], 1, IAuxDetect, [nc, anchors]], # Detect(P3, P4, P5, P6)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/training/yolov7.yaml b/cv/detection/yolov7/pytorch/cfg/training/yolov7.yaml
deleted file mode 100644
index 9a807e58fb2a8b03f3eff5c97228eede0e9cdb9f..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/training/yolov7.yaml
+++ /dev/null
@@ -1,140 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# yolov7 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [32, 3, 1]], # 0
-
- [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
- [-1, 1, Conv, [64, 3, 1]],
-
- [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 11
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [128, 1, 1]],
- [-3, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 16-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 24
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [256, 1, 1]],
- [-3, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 29-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [1024, 1, 1]], # 37
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [512, 1, 1]],
- [-3, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 42-P5/32
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [1024, 1, 1]], # 50
- ]
-
-# yolov7 head
-head:
- [[-1, 1, SPPCSPC, [512]], # 51
-
- [-1, 1, Conv, [256, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [37, 1, Conv, [256, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 63
-
- [-1, 1, Conv, [128, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [24, 1, Conv, [128, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [128, 1, 1]], # 75
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [128, 1, 1]],
- [-3, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 2]],
- [[-1, -3, 63], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [256, 1, 1]], # 88
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [256, 1, 1]],
- [-3, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 2]],
- [[-1, -3, 51], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
- [-1, 1, Conv, [512, 1, 1]], # 101
-
- [75, 1, RepConv, [256, 3, 1]],
- [88, 1, RepConv, [512, 3, 1]],
- [101, 1, RepConv, [1024, 3, 1]],
-
- [[102,103,104], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/cfg/training/yolov7x.yaml b/cv/detection/yolov7/pytorch/cfg/training/yolov7x.yaml
deleted file mode 100644
index 207be886353b92ec6b3a0db915ff37bebf914961..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/cfg/training/yolov7x.yaml
+++ /dev/null
@@ -1,156 +0,0 @@
-# parameters
-nc: 80 # number of classes
-depth_multiple: 1.0 # model depth multiple
-width_multiple: 1.0 # layer channel multiple
-
-# anchors
-anchors:
- - [12,16, 19,36, 40,28] # P3/8
- - [36,75, 76,55, 72,146] # P4/16
- - [142,110, 192,243, 459,401] # P5/32
-
-# yolov7 backbone
-backbone:
- # [from, number, module, args]
- [[-1, 1, Conv, [40, 3, 1]], # 0
-
- [-1, 1, Conv, [80, 3, 2]], # 1-P1/2
- [-1, 1, Conv, [80, 3, 1]],
-
- [-1, 1, Conv, [160, 3, 2]], # 3-P2/4
- [-1, 1, Conv, [64, 1, 1]],
- [-2, 1, Conv, [64, 1, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [-1, 1, Conv, [64, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 13
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [160, 1, 1]],
- [-3, 1, Conv, [160, 1, 1]],
- [-1, 1, Conv, [160, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 18-P3/8
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 28
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [320, 1, 1]],
- [-3, 1, Conv, [320, 1, 1]],
- [-1, 1, Conv, [320, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 33-P4/16
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 43
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [640, 1, 1]],
- [-3, 1, Conv, [640, 1, 1]],
- [-1, 1, Conv, [640, 3, 2]],
- [[-1, -3], 1, Concat, [1]], # 48-P5/32
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [1280, 1, 1]], # 58
- ]
-
-# yolov7 head
-head:
- [[-1, 1, SPPCSPC, [640]], # 59
-
- [-1, 1, Conv, [320, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [43, 1, Conv, [320, 1, 1]], # route backbone P4
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 73
-
- [-1, 1, Conv, [160, 1, 1]],
- [-1, 1, nn.Upsample, [None, 2, 'nearest']],
- [28, 1, Conv, [160, 1, 1]], # route backbone P3
- [[-1, -2], 1, Concat, [1]],
-
- [-1, 1, Conv, [128, 1, 1]],
- [-2, 1, Conv, [128, 1, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [-1, 1, Conv, [128, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [160, 1, 1]], # 87
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [160, 1, 1]],
- [-3, 1, Conv, [160, 1, 1]],
- [-1, 1, Conv, [160, 3, 2]],
- [[-1, -3, 73], 1, Concat, [1]],
-
- [-1, 1, Conv, [256, 1, 1]],
- [-2, 1, Conv, [256, 1, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [-1, 1, Conv, [256, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [320, 1, 1]], # 102
-
- [-1, 1, MP, []],
- [-1, 1, Conv, [320, 1, 1]],
- [-3, 1, Conv, [320, 1, 1]],
- [-1, 1, Conv, [320, 3, 2]],
- [[-1, -3, 59], 1, Concat, [1]],
-
- [-1, 1, Conv, [512, 1, 1]],
- [-2, 1, Conv, [512, 1, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [-1, 1, Conv, [512, 3, 1]],
- [[-1, -3, -5, -7, -8], 1, Concat, [1]],
- [-1, 1, Conv, [640, 1, 1]], # 117
-
- [87, 1, Conv, [320, 3, 1]],
- [102, 1, Conv, [640, 3, 1]],
- [117, 1, Conv, [1280, 3, 1]],
-
- [[118,119,120], 1, IDetect, [nc, anchors]], # Detect(P3, P4, P5)
- ]
diff --git a/cv/detection/yolov7/pytorch/data/coco.yaml b/cv/detection/yolov7/pytorch/data/coco.yaml
deleted file mode 100644
index 447c792fe5610f82c0e2a1d0ad7350353fe6935d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/data/coco.yaml
+++ /dev/null
@@ -1,23 +0,0 @@
-# COCO 2017 dataset http://cocodataset.org
-
-# download command/URL (optional)
-download: bash ./scripts/get_coco.sh
-
-# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
-train: /home/datasets/cv/coco/train2017.txt # 118287 images
-val: /home/datasets/cv/coco/val2017.txt # 5000 images
-test: /home/datasets/cv/coco/val2017.txt # 5000 images (val2017 reused as test); submit test-dev results to https://competitions.codalab.org/competitions/20794
-
-# number of classes
-nc: 80
-
-# class names
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
- 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
- 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
- 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
- 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
- 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
- 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
- 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
- 'hair drier', 'toothbrush' ]
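
The `train`/`val` entries above point at text files listing one image path per line; labels are then located by the YOLOv5/YOLOv7 dataset convention of swapping the last `/images/` path segment for `/labels/` and the file suffix for `.txt`:

```python
# Image-path -> label-path mapping implied by the dataset layout above.
def img2label_path(img_path: str) -> str:
    sa, sb = "/images/", "/labels/"
    return sb.join(img_path.rsplit(sa, 1)).rsplit(".", 1)[0] + ".txt"

print(img2label_path("/home/datasets/cv/coco/images/train2017/000000000009.jpg"))
# /home/datasets/cv/coco/labels/train2017/000000000009.txt
```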
diff --git a/cv/detection/yolov7/pytorch/data/hyp.scratch.custom.yaml b/cv/detection/yolov7/pytorch/data/hyp.scratch.custom.yaml
deleted file mode 100644
index 8570d730178ca4c6a6d102096e8512d6edc6de74..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/data/hyp.scratch.custom.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
-lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
-momentum: 0.937 # SGD momentum/Adam beta1
-weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
-warmup_momentum: 0.8 # warmup initial momentum
-warmup_bias_lr: 0.1 # warmup initial bias lr
-box: 0.05 # box loss gain
-cls: 0.3 # cls loss gain
-cls_pw: 1.0 # cls BCELoss positive_weight
-obj: 0.7 # obj loss gain (scale with pixels)
-obj_pw: 1.0 # obj BCELoss positive_weight
-iou_t: 0.20 # IoU training threshold
-anchor_t: 4.0 # anchor-multiple threshold
-# anchors: 3 # anchors per output layer (0 to ignore)
-fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
-hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
-hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
-hsv_v: 0.4 # image HSV-Value augmentation (fraction)
-degrees: 0.0 # image rotation (+/- deg)
-translate: 0.2 # image translation (+/- fraction)
-scale: 0.5 # image scale (+/- gain)
-shear: 0.0 # image shear (+/- deg)
-perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
-flipud: 0.0 # image flip up-down (probability)
-fliplr: 0.5 # image flip left-right (probability)
-mosaic: 1.0 # image mosaic (probability)
-mixup: 0.0 # image mixup (probability)
-copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.0 # image copy paste (probability), use 0 for faster training
-loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
\ No newline at end of file
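
In these hyp files `lrf` is relative: the schedule anneals the learning rate from `lr0` down to `lr0 * lrf`. A sketch of the cosine "one cycle" factor these trainers typically pair with such hyps:

```python
# Cosine one-cycle factor: 1.0 at epoch 0, lrf at the final epoch.
import math

def one_cycle_factor(epoch, epochs, lrf):
    return ((1 - math.cos(epoch * math.pi / epochs)) / 2) * (lrf - 1) + 1

lr0, lrf, epochs = 0.01, 0.1, 300
for e in (0, 150, 300):
    print(e, round(lr0 * one_cycle_factor(e, epochs, lrf), 5))
# 0 -> 0.01 (lr0); 150 -> 0.0055; 300 -> 0.001 (lr0 * lrf)
```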
diff --git a/cv/detection/yolov7/pytorch/data/hyp.scratch.p5.yaml b/cv/detection/yolov7/pytorch/data/hyp.scratch.p5.yaml
deleted file mode 100644
index a409bac3dd5be632d71a92b27cd3adb77e7e2408..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/data/hyp.scratch.p5.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
-lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
-momentum: 0.937 # SGD momentum/Adam beta1
-weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
-warmup_momentum: 0.8 # warmup initial momentum
-warmup_bias_lr: 0.1 # warmup initial bias lr
-box: 0.05 # box loss gain
-cls: 0.3 # cls loss gain
-cls_pw: 1.0 # cls BCELoss positive_weight
-obj: 0.7 # obj loss gain (scale with pixels)
-obj_pw: 1.0 # obj BCELoss positive_weight
-iou_t: 0.20 # IoU training threshold
-anchor_t: 4.0 # anchor-multiple threshold
-# anchors: 3 # anchors per output layer (0 to ignore)
-fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
-hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
-hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
-hsv_v: 0.4 # image HSV-Value augmentation (fraction)
-degrees: 0.0 # image rotation (+/- deg)
-translate: 0.2 # image translation (+/- fraction)
-scale: 0.9 # image scale (+/- gain)
-shear: 0.0 # image shear (+/- deg)
-perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
-flipud: 0.0 # image flip up-down (probability)
-fliplr: 0.5 # image flip left-right (probability)
-mosaic: 1.0 # image mosaic (probability)
-mixup: 0.15 # image mixup (probability)
-copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.15 # image copy paste (probability), use 0 for faster training
-loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/data/hyp.scratch.p6.yaml b/cv/detection/yolov7/pytorch/data/hyp.scratch.p6.yaml
deleted file mode 100644
index 192d0d5ddc3eeaad9c12717294d9392ae8f85059..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/data/hyp.scratch.p6.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
-lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf)
-momentum: 0.937 # SGD momentum/Adam beta1
-weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
-warmup_momentum: 0.8 # warmup initial momentum
-warmup_bias_lr: 0.1 # warmup initial bias lr
-box: 0.05 # box loss gain
-cls: 0.3 # cls loss gain
-cls_pw: 1.0 # cls BCELoss positive_weight
-obj: 0.7 # obj loss gain (scale with pixels)
-obj_pw: 1.0 # obj BCELoss positive_weight
-iou_t: 0.20 # IoU training threshold
-anchor_t: 4.0 # anchor-multiple threshold
-# anchors: 3 # anchors per output layer (0 to ignore)
-fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
-hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
-hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
-hsv_v: 0.4 # image HSV-Value augmentation (fraction)
-degrees: 0.0 # image rotation (+/- deg)
-translate: 0.2 # image translation (+/- fraction)
-scale: 0.9 # image scale (+/- gain)
-shear: 0.0 # image shear (+/- deg)
-perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
-flipud: 0.0 # image flip up-down (probability)
-fliplr: 0.5 # image flip left-right (probability)
-mosaic: 1.0 # image mosaic (probability)
-mixup: 0.15 # image mixup (probability)
-copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.15 # image copy paste (probability), use 0 for faster training
-loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/data/hyp.scratch.tiny.yaml b/cv/detection/yolov7/pytorch/data/hyp.scratch.tiny.yaml
deleted file mode 100644
index b0dc14ae1b39b3a836852427b7a4bb17320b5c41..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/data/hyp.scratch.tiny.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
-lrf: 0.01 # final OneCycleLR learning rate (lr0 * lrf)
-momentum: 0.937 # SGD momentum/Adam beta1
-weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
-warmup_momentum: 0.8 # warmup initial momentum
-warmup_bias_lr: 0.1 # warmup initial bias lr
-box: 0.05 # box loss gain
-cls: 0.5 # cls loss gain
-cls_pw: 1.0 # cls BCELoss positive_weight
-obj: 1.0 # obj loss gain (scale with pixels)
-obj_pw: 1.0 # obj BCELoss positive_weight
-iou_t: 0.20 # IoU training threshold
-anchor_t: 4.0 # anchor-multiple threshold
-# anchors: 3 # anchors per output layer (0 to ignore)
-fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
-hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
-hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
-hsv_v: 0.4 # image HSV-Value augmentation (fraction)
-degrees: 0.0 # image rotation (+/- deg)
-translate: 0.1 # image translation (+/- fraction)
-scale: 0.5 # image scale (+/- gain)
-shear: 0.0 # image shear (+/- deg)
-perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
-flipud: 0.0 # image flip up-down (probability)
-fliplr: 0.5 # image flip left-right (probability)
-mosaic: 1.0 # image mosaic (probability)
-mixup: 0.05 # image mixup (probability)
-copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.05 # image copy paste (probability), use 0 for faster training
-loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
diff --git a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/README.md b/cv/detection/yolov7/pytorch/deploy/triton-inference-server/README.md
deleted file mode 100644
index 13af4daa91d5f2b9a6752840e9469743943f650e..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/README.md
+++ /dev/null
@@ -1,164 +0,0 @@
-# YOLOv7 on Triton Inference Server
-
-Instructions to deploy YOLOv7 as a TensorRT engine to [Triton Inference Server](https://github.com/NVIDIA/triton-inference-server).
-
-Triton Inference Server takes care of model deployment and brings many out-of-the-box benefits, such as gRPC and HTTP interfaces, automatic scheduling across multiple GPUs, shared memory (even on GPU), dynamic server-side batching, health metrics, and memory resource management.
-
-No additional dependencies are needed to run this deployment, apart from a working Docker daemon with GPU support.
-
-## Export TensorRT
-
-See https://github.com/WongKinYiu/yolov7#export for more info.
-
-```bash
-# Install onnx-simplifier (not listed in the general yolov7 requirements.txt)
-pip3 install onnx-simplifier
-
-# Pytorch Yolov7 -> ONNX with grid, EfficientNMS plugin and dynamic batch size
-python export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
-# ONNX -> TensorRT with trtexec and docker
-docker run -it --rm --gpus=all nvcr.io/nvidia/tensorrt:22.06-py3
-# Copy onnx -> container: docker cp yolov7.onnx <container-id>:/workspace/
-# Export with FP16 precision, min batch 1, opt batch 8 and max batch 8
-./tensorrt/bin/trtexec --onnx=yolov7.onnx --minShapes=images:1x3x640x640 --optShapes=images:8x3x640x640 --maxShapes=images:8x3x640x640 --fp16 --workspace=4096 --saveEngine=yolov7-fp16-1x8x8.engine --timingCacheFile=timing.cache
-# Test engine
-./tensorrt/bin/trtexec --loadEngine=yolov7-fp16-1x8x8.engine
-# Copy engine -> host: docker cp <container-id>:/workspace/yolov7-fp16-1x8x8.engine .
-```
-
-Example output of the engine test on an RTX 3090:
-
-```
-[I] === Performance summary ===
-[I] Throughput: 73.4985 qps
-[I] Latency: min = 14.8578 ms, max = 15.8344 ms, mean = 15.07 ms, median = 15.0422 ms, percentile(99%) = 15.7443 ms
-[I] End-to-End Host Latency: min = 25.8715 ms, max = 28.4102 ms, mean = 26.672 ms, median = 26.6082 ms, percentile(99%) = 27.8314 ms
-[I] Enqueue Time: min = 0.793701 ms, max = 1.47144 ms, mean = 1.2008 ms, median = 1.28644 ms, percentile(99%) = 1.38965 ms
-[I] H2D Latency: min = 1.50073 ms, max = 1.52454 ms, mean = 1.51225 ms, median = 1.51404 ms, percentile(99%) = 1.51941 ms
-[I] GPU Compute Time: min = 13.3386 ms, max = 14.3186 ms, mean = 13.5448 ms, median = 13.5178 ms, percentile(99%) = 14.2151 ms
-[I] D2H Latency: min = 0.00878906 ms, max = 0.0172729 ms, mean = 0.0128844 ms, median = 0.0125732 ms, percentile(99%) = 0.0166016 ms
-[I] Total Host Walltime: 3.04768 s
-[I] Total GPU Compute Time: 3.03404 s
-[I] Explanations of the performance metrics are printed in the verbose logs.
-```
-Note: 73.5 qps x batch 8 = 588 fps @ ~15ms latency.
-
-## Model Repository
-
-See [Triton Model Repository Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_repository.md#model-repository) for more info.
-
-```bash
-# Create folder structure
-mkdir -p triton-deploy/models/yolov7/1/
-touch triton-deploy/models/yolov7/config.pbtxt
-# Place model
-mv yolov7-fp16-1x8x8.engine triton-deploy/models/yolov7/1/model.plan
-```
-
-## Model Configuration
-
-See [Triton Model Configuration Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#model-configuration) for more info.
-
-Minimal configuration for `triton-deploy/models/yolov7/config.pbtxt`:
-
-```
-name: "yolov7"
-platform: "tensorrt_plan"
-max_batch_size: 8
-dynamic_batching { }
-```
-
-Example repository:
-
-```bash
-$ tree triton-deploy/
-triton-deploy/
-└── models
- └── yolov7
- ├── 1
- │ └── model.plan
- └── config.pbtxt
-
-3 directories, 2 files
-```
-
-## Start Triton Inference Server
-
-```
-docker run --gpus all --rm --ipc=host --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 -v$(pwd)/triton-deploy/models:/models nvcr.io/nvidia/tritonserver:22.06-py3 tritonserver --model-repository=/models --strict-model-config=false --log-verbose 1
-```
-
-In the log you should see:
-
-```
-+--------+---------+--------+
-| Model | Version | Status |
-+--------+---------+--------+
-| yolov7 | 1 | READY |
-+--------+---------+--------+
-```
-
-## Performance with Model Analyzer
-
-See [Triton Model Analyzer Documentation](https://github.com/triton-inference-server/server/blob/main/docs/model_analyzer.md#model-analyzer) for more info.
-
-Performance numbers @ RTX 3090 + AMD Ryzen 9 5950X
-
-Example test with 16 concurrent clients using shared memory, each sending batch-size-1 requests:
-
-```bash
-docker run -it --ipc=host --net=host nvcr.io/nvidia/tritonserver:22.06-py3-sdk /bin/bash
-
-./install/bin/perf_analyzer -m yolov7 -u 127.0.0.1:8001 -i grpc --shared-memory system --concurrency-range 16
-
-# Result (truncated)
-Concurrency: 16, throughput: 590.119 infer/sec, latency 27080 usec
-```
-
-Throughput for 16 clients with batch size 1 matches that of a single thread running the engine locally at batch size 16, thanks to the Triton [Dynamic Batching Strategy](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#dynamic-batcher). The result without dynamic batching (disabled in the model configuration) is considerably worse:
-
-```bash
-# Result (truncated)
-Concurrency: 16, throughput: 335.587 infer/sec, latency 47616 usec
-```
-
-## How to run model in your code
-
-An example client can be found in client.py. It can run on dummy input, images, and videos.
-
-```bash
-pip3 install tritonclient[all] opencv-python
-python3 client.py image data/dog.jpg
-```
-
-
-```
-$ python3 client.py --help
-usage: client.py [-h] [-m MODEL] [--width WIDTH] [--height HEIGHT] [-u URL] [-o OUT] [-f FPS] [-i] [-v] [-t CLIENT_TIMEOUT] [-s] [-r ROOT_CERTIFICATES] [-p PRIVATE_KEY] [-x CERTIFICATE_CHAIN] {dummy,image,video} [input]
-
-positional arguments:
-  {dummy,image,video}   Run mode. 'dummy' will send an empty buffer to the server to test if inference works. 'image' will process an image. 'video' will process a video.
- input Input file to load from in image or video mode
-
-optional arguments:
- -h, --help show this help message and exit
- -m MODEL, --model MODEL
- Inference model name, default yolov7
- --width WIDTH Inference model input width, default 640
- --height HEIGHT Inference model input height, default 640
- -u URL, --url URL Inference server URL, default localhost:8001
- -o OUT, --out OUT Write output into file instead of displaying it
- -f FPS, --fps FPS Video output fps, default 24.0 FPS
- -i, --model-info Print model status, configuration and statistics
- -v, --verbose Enable verbose client output
- -t CLIENT_TIMEOUT, --client-timeout CLIENT_TIMEOUT
- Client timeout in seconds, default no timeout
- -s, --ssl Enable SSL encrypted channel to the server
- -r ROOT_CERTIFICATES, --root-certificates ROOT_CERTIFICATES
- File holding PEM-encoded root certificates, default none
- -p PRIVATE_KEY, --private-key PRIVATE_KEY
- File holding PEM-encoded private key, default is none
- -x CERTIFICATE_CHAIN, --certificate-chain CERTIFICATE_CHAIN
-                        File holding PEM-encoded certificate chain, default is none
-```
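
For a condensed view of what `client.py`'s `dummy` mode does, here is a minimal sketch of a single gRPC inference call against the server started above; tensor names match the exported engine's EfficientNMS outputs:

```python
# Minimal tritonclient sketch; assumes the Triton server from the previous
# section is running with the yolov7 model loaded.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")
assert client.is_server_live() and client.is_model_ready("yolov7")

inp = grpcclient.InferInput("images", [1, 3, 640, 640], "FP32")
inp.set_data_from_numpy(np.ones((1, 3, 640, 640), dtype=np.float32))
outs = [grpcclient.InferRequestedOutput(n)
        for n in ("num_dets", "det_boxes", "det_scores", "det_classes")]

results = client.infer(model_name="yolov7", inputs=[inp], outputs=outs)
print(results.as_numpy("num_dets"))  # detections per image in the batch
```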
diff --git a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/boundingbox.py b/cv/detection/yolov7/pytorch/deploy/triton-inference-server/boundingbox.py
deleted file mode 100644
index 8b95330b8a669e7df300066aa9b31723e055b031..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/boundingbox.py
+++ /dev/null
@@ -1,33 +0,0 @@
-class BoundingBox:
- def __init__(self, classID, confidence, x1, x2, y1, y2, image_width, image_height):
- self.classID = classID
- self.confidence = confidence
- self.x1 = x1
- self.x2 = x2
- self.y1 = y1
- self.y2 = y2
- self.u1 = x1 / image_width
- self.u2 = x2 / image_width
- self.v1 = y1 / image_height
- self.v2 = y2 / image_height
-
- def box(self):
- return (self.x1, self.y1, self.x2, self.y2)
-
- def width(self):
- return self.x2 - self.x1
-
- def height(self):
- return self.y2 - self.y1
-
- def center_absolute(self):
- return (0.5 * (self.x1 + self.x2), 0.5 * (self.y1 + self.y2))
-
- def center_normalized(self):
- return (0.5 * (self.u1 + self.u2), 0.5 * (self.v1 + self.v2))
-
- def size_absolute(self):
- return (self.x2 - self.x1, self.y2 - self.y1)
-
- def size_normalized(self):
- return (self.u2 - self.u1, self.v2 - self.v1)
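
A short usage sketch of `BoundingBox`, contrasting the absolute pixel accessors with the normalized `u`/`v` ones (values are illustrative):

```python
from boundingbox import BoundingBox

box = BoundingBox(classID=0, confidence=0.9,
                  x1=100, x2=200, y1=120, y2=320,
                  image_width=640, image_height=480)
print(box.box())              # (100, 120, 200, 320) in pixels
print(box.size_absolute())    # (100, 200) pixels
print(box.size_normalized())  # (0.15625, ~0.4167) as fractions of the image
```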
diff --git a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/client.py b/cv/detection/yolov7/pytorch/deploy/triton-inference-server/client.py
deleted file mode 100644
index aedca11c76b2cf109cfd2e435a6c6764b42fa9fe..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/client.py
+++ /dev/null
@@ -1,334 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import numpy as np
-import sys
-import cv2
-
-import tritonclient.grpc as grpcclient
-from tritonclient.utils import InferenceServerException
-
-from processing import preprocess, postprocess
-from render import render_box, render_filled_box, get_text_size, render_text, RAND_COLORS
-from labels import COCOLabels
-
-INPUT_NAMES = ["images"]
-OUTPUT_NAMES = ["num_dets", "det_boxes", "det_scores", "det_classes"]
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('mode',
- choices=['dummy', 'image', 'video'],
- default='dummy',
-                        help='Run mode. \'dummy\' will send an empty buffer to the server to test if inference works. \'image\' will process an image. \'video\' will process a video.')
- parser.add_argument('input',
- type=str,
- nargs='?',
- help='Input file to load from in image or video mode')
- parser.add_argument('-m',
- '--model',
- type=str,
- required=False,
- default='yolov7',
- help='Inference model name, default yolov7')
- parser.add_argument('--width',
- type=int,
- required=False,
- default=640,
- help='Inference model input width, default 640')
- parser.add_argument('--height',
- type=int,
- required=False,
- default=640,
- help='Inference model input height, default 640')
- parser.add_argument('-u',
- '--url',
- type=str,
- required=False,
- default='localhost:8001',
- help='Inference server URL, default localhost:8001')
- parser.add_argument('-o',
- '--out',
- type=str,
- required=False,
- default='',
- help='Write output into file instead of displaying it')
- parser.add_argument('-f',
- '--fps',
- type=float,
- required=False,
- default=24.0,
- help='Video output fps, default 24.0 FPS')
- parser.add_argument('-i',
- '--model-info',
- action="store_true",
- required=False,
- default=False,
- help='Print model status, configuration and statistics')
- parser.add_argument('-v',
- '--verbose',
- action="store_true",
- required=False,
- default=False,
- help='Enable verbose client output')
- parser.add_argument('-t',
- '--client-timeout',
- type=float,
- required=False,
- default=None,
- help='Client timeout in seconds, default no timeout')
- parser.add_argument('-s',
- '--ssl',
- action="store_true",
- required=False,
- default=False,
- help='Enable SSL encrypted channel to the server')
- parser.add_argument('-r',
- '--root-certificates',
- type=str,
- required=False,
- default=None,
- help='File holding PEM-encoded root certificates, default none')
- parser.add_argument('-p',
- '--private-key',
- type=str,
- required=False,
- default=None,
- help='File holding PEM-encoded private key, default is none')
- parser.add_argument('-x',
- '--certificate-chain',
- type=str,
- required=False,
- default=None,
-                        help='File holding PEM-encoded certificate chain, default is none')
-
- FLAGS = parser.parse_args()
-
- # Create server context
- try:
- triton_client = grpcclient.InferenceServerClient(
- url=FLAGS.url,
- verbose=FLAGS.verbose,
- ssl=FLAGS.ssl,
- root_certificates=FLAGS.root_certificates,
- private_key=FLAGS.private_key,
- certificate_chain=FLAGS.certificate_chain)
- except Exception as e:
- print("context creation failed: " + str(e))
- sys.exit()
-
- # Health check
- if not triton_client.is_server_live():
- print("FAILED : is_server_live")
- sys.exit(1)
-
- if not triton_client.is_server_ready():
- print("FAILED : is_server_ready")
- sys.exit(1)
-
- if not triton_client.is_model_ready(FLAGS.model):
- print("FAILED : is_model_ready")
- sys.exit(1)
-
- if FLAGS.model_info:
- # Model metadata
- try:
- metadata = triton_client.get_model_metadata(FLAGS.model)
- print(metadata)
- except InferenceServerException as ex:
- if "Request for unknown model" not in ex.message():
- print("FAILED : get_model_metadata")
- print("Got: {}".format(ex.message()))
- sys.exit(1)
- else:
- print("FAILED : get_model_metadata")
- sys.exit(1)
-
- # Model configuration
- try:
- config = triton_client.get_model_config(FLAGS.model)
- if not (config.config.name == FLAGS.model):
- print("FAILED: get_model_config")
- sys.exit(1)
- print(config)
- except InferenceServerException as ex:
- print("FAILED : get_model_config")
- print("Got: {}".format(ex.message()))
- sys.exit(1)
-
- # DUMMY MODE
- if FLAGS.mode == 'dummy':
- print("Running in 'dummy' mode")
-        print("Creating empty buffer filled with ones...")
- inputs = []
- outputs = []
- inputs.append(grpcclient.InferInput(INPUT_NAMES[0], [1, 3, FLAGS.width, FLAGS.height], "FP32"))
- inputs[0].set_data_from_numpy(np.ones(shape=(1, 3, FLAGS.width, FLAGS.height), dtype=np.float32))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[0]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[1]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[2]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[3]))
-
- print("Invoking inference...")
- results = triton_client.infer(model_name=FLAGS.model,
- inputs=inputs,
- outputs=outputs,
- client_timeout=FLAGS.client_timeout)
- if FLAGS.model_info:
- statistics = triton_client.get_inference_statistics(model_name=FLAGS.model)
- if len(statistics.model_stats) != 1:
- print("FAILED: get_inference_statistics")
- sys.exit(1)
- print(statistics)
- print("Done")
-
- for output in OUTPUT_NAMES:
- result = results.as_numpy(output)
- print(f"Received result buffer \"{output}\" of size {result.shape}")
- print(f"Naive buffer sum: {np.sum(result)}")
-
- # IMAGE MODE
- if FLAGS.mode == 'image':
- print("Running in 'image' mode")
- if not FLAGS.input:
- print("FAILED: no input image")
- sys.exit(1)
-
- inputs = []
- outputs = []
- inputs.append(grpcclient.InferInput(INPUT_NAMES[0], [1, 3, FLAGS.width, FLAGS.height], "FP32"))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[0]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[1]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[2]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[3]))
-
- print("Creating buffer from image file...")
- input_image = cv2.imread(str(FLAGS.input))
- if input_image is None:
- print(f"FAILED: could not load input image {str(FLAGS.input)}")
- sys.exit(1)
- input_image_buffer = preprocess(input_image, [FLAGS.width, FLAGS.height])
- input_image_buffer = np.expand_dims(input_image_buffer, axis=0)
-
- inputs[0].set_data_from_numpy(input_image_buffer)
-
- print("Invoking inference...")
- results = triton_client.infer(model_name=FLAGS.model,
- inputs=inputs,
- outputs=outputs,
- client_timeout=FLAGS.client_timeout)
- if FLAGS.model_info:
- statistics = triton_client.get_inference_statistics(model_name=FLAGS.model)
- if len(statistics.model_stats) != 1:
- print("FAILED: get_inference_statistics")
- sys.exit(1)
- print(statistics)
- print("Done")
-
- for output in OUTPUT_NAMES:
- result = results.as_numpy(output)
- print(f"Received result buffer \"{output}\" of size {result.shape}")
- print(f"Naive buffer sum: {np.sum(result)}")
-
- num_dets = results.as_numpy(OUTPUT_NAMES[0])
- det_boxes = results.as_numpy(OUTPUT_NAMES[1])
- det_scores = results.as_numpy(OUTPUT_NAMES[2])
- det_classes = results.as_numpy(OUTPUT_NAMES[3])
- detected_objects = postprocess(num_dets, det_boxes, det_scores, det_classes, input_image.shape[1], input_image.shape[0], [FLAGS.width, FLAGS.height])
- print(f"Detected objects: {len(detected_objects)}")
-
- for box in detected_objects:
- print(f"{COCOLabels(box.classID).name}: {box.confidence}")
- input_image = render_box(input_image, box.box(), color=tuple(RAND_COLORS[box.classID % 64].tolist()))
- size = get_text_size(input_image, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", normalised_scaling=0.6)
- input_image = render_filled_box(input_image, (box.x1 - 3, box.y1 - 3, box.x1 + size[0], box.y1 + size[1]), color=(220, 220, 220))
- input_image = render_text(input_image, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", (box.x1, box.y1), color=(30, 30, 30), normalised_scaling=0.5)
-
- if FLAGS.out:
- cv2.imwrite(FLAGS.out, input_image)
- print(f"Saved result to {FLAGS.out}")
- else:
- cv2.imshow('image', input_image)
- cv2.waitKey(0)
- cv2.destroyAllWindows()
-
- # VIDEO MODE
- if FLAGS.mode == 'video':
- print("Running in 'video' mode")
- if not FLAGS.input:
- print("FAILED: no input video")
- sys.exit(1)
-
- inputs = []
- outputs = []
- inputs.append(grpcclient.InferInput(INPUT_NAMES[0], [1, 3, FLAGS.width, FLAGS.height], "FP32"))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[0]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[1]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[2]))
- outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[3]))
-
- print("Opening input video stream...")
- cap = cv2.VideoCapture(FLAGS.input)
- if not cap.isOpened():
- print(f"FAILED: cannot open video {FLAGS.input}")
- sys.exit(1)
-
- counter = 0
- out = None
- print("Invoking inference...")
- while True:
- ret, frame = cap.read()
- if not ret:
- print("failed to fetch next frame")
- break
-
- if counter == 0 and FLAGS.out:
- print("Opening output video stream...")
- fourcc = cv2.VideoWriter_fourcc('M', 'P', '4', 'V')
- out = cv2.VideoWriter(FLAGS.out, fourcc, FLAGS.fps, (frame.shape[1], frame.shape[0]))
-
- input_image_buffer = preprocess(frame, [FLAGS.width, FLAGS.height])
- input_image_buffer = np.expand_dims(input_image_buffer, axis=0)
-
- inputs[0].set_data_from_numpy(input_image_buffer)
-
- results = triton_client.infer(model_name=FLAGS.model,
- inputs=inputs,
- outputs=outputs,
- client_timeout=FLAGS.client_timeout)
-
- num_dets = results.as_numpy("num_dets")
- det_boxes = results.as_numpy("det_boxes")
- det_scores = results.as_numpy("det_scores")
- det_classes = results.as_numpy("det_classes")
- detected_objects = postprocess(num_dets, det_boxes, det_scores, det_classes, frame.shape[1], frame.shape[0], [FLAGS.width, FLAGS.height])
- print(f"Frame {counter}: {len(detected_objects)} objects")
- counter += 1
-
- for box in detected_objects:
- print(f"{COCOLabels(box.classID).name}: {box.confidence}")
- frame = render_box(frame, box.box(), color=tuple(RAND_COLORS[box.classID % 64].tolist()))
- size = get_text_size(frame, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", normalised_scaling=0.6)
- frame = render_filled_box(frame, (box.x1 - 3, box.y1 - 3, box.x1 + size[0], box.y1 + size[1]), color=(220, 220, 220))
- frame = render_text(frame, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", (box.x1, box.y1), color=(30, 30, 30), normalised_scaling=0.5)
-
- if FLAGS.out:
- out.write(frame)
- else:
- cv2.imshow('image', frame)
- if cv2.waitKey(1) == ord('q'):
- break
-
- if FLAGS.model_info:
- statistics = triton_client.get_inference_statistics(model_name=FLAGS.model)
- if len(statistics.model_stats) != 1:
- print("FAILED: get_inference_statistics")
- sys.exit(1)
- print(statistics)
- print("Done")
-
- cap.release()
- if FLAGS.out:
- out.release()
- else:
- cv2.destroyAllWindows()
diff --git a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/labels.py b/cv/detection/yolov7/pytorch/deploy/triton-inference-server/labels.py
deleted file mode 100644
index ba6c5c516fcd1149233f34d73bb46d472a2bfed4..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/labels.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from enum import Enum
-
-class COCOLabels(Enum):
- PERSON = 0
- BICYCLE = 1
- CAR = 2
- MOTORBIKE = 3
- AEROPLANE = 4
- BUS = 5
- TRAIN = 6
- TRUCK = 7
- BOAT = 8
- TRAFFIC_LIGHT = 9
- FIRE_HYDRANT = 10
- STOP_SIGN = 11
- PARKING_METER = 12
- BENCH = 13
- BIRD = 14
- CAT = 15
- DOG = 16
- HORSE = 17
- SHEEP = 18
- COW = 19
- ELEPHANT = 20
- BEAR = 21
- ZEBRA = 22
- GIRAFFE = 23
- BACKPACK = 24
- UMBRELLA = 25
- HANDBAG = 26
- TIE = 27
- SUITCASE = 28
- FRISBEE = 29
- SKIS = 30
- SNOWBOARD = 31
- SPORTS_BALL = 32
- KITE = 33
- BASEBALL_BAT = 34
- BASEBALL_GLOVE = 35
- SKATEBOARD = 36
- SURFBOARD = 37
- TENNIS_RACKET = 38
- BOTTLE = 39
- WINE_GLASS = 40
- CUP = 41
- FORK = 42
- KNIFE = 43
- SPOON = 44
- BOWL = 45
- BANANA = 46
- APPLE = 47
- SANDWICH = 48
- ORANGE = 49
- BROCCOLI = 50
- CARROT = 51
- HOT_DOG = 52
- PIZZA = 53
- DONUT = 54
- CAKE = 55
- CHAIR = 56
- SOFA = 57
- POTTEDPLANT = 58
- BED = 59
- DININGTABLE = 60
- TOILET = 61
- TVMONITOR = 62
- LAPTOP = 63
- MOUSE = 64
- REMOTE = 65
- KEYBOARD = 66
- CELL_PHONE = 67
- MICROWAVE = 68
- OVEN = 69
- TOASTER = 70
- SINK = 71
- REFRIGERATOR = 72
- BOOK = 73
- CLOCK = 74
- VASE = 75
- SCISSORS = 76
- TEDDY_BEAR = 77
- HAIR_DRIER = 78
- TOOTHBRUSH = 79
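
A one-liner showing how `client.py` turns a raw class id from the engine into a readable label:

```python
from labels import COCOLabels

print(COCOLabels(16).name)  # -> 'DOG'; client.py uses this for overlay text
```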
diff --git a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/processing.py b/cv/detection/yolov7/pytorch/deploy/triton-inference-server/processing.py
deleted file mode 100644
index 3d51c50a3db50ffbfaef565c019212c316708a6b..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/processing.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from boundingbox import BoundingBox
-
-import cv2
-import numpy as np
-
-def preprocess(img, input_shape, letter_box=True):
- if letter_box:
- img_h, img_w, _ = img.shape
- new_h, new_w = input_shape[0], input_shape[1]
- offset_h, offset_w = 0, 0
- if (new_w / img_w) <= (new_h / img_h):
- new_h = int(img_h * new_w / img_w)
- offset_h = (input_shape[0] - new_h) // 2
- else:
- new_w = int(img_w * new_h / img_h)
- offset_w = (input_shape[1] - new_w) // 2
- resized = cv2.resize(img, (new_w, new_h))
- img = np.full((input_shape[0], input_shape[1], 3), 127, dtype=np.uint8)
- img[offset_h:(offset_h + new_h), offset_w:(offset_w + new_w), :] = resized
- else:
- img = cv2.resize(img, (input_shape[1], input_shape[0]))
-
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- img = img.transpose((2, 0, 1)).astype(np.float32)
- img /= 255.0
- return img
-
-def postprocess(num_dets, det_boxes, det_scores, det_classes, img_w, img_h, input_shape, letter_box=True):
- boxes = det_boxes[0, :num_dets[0][0]] / np.array([input_shape[0], input_shape[1], input_shape[0], input_shape[1]], dtype=np.float32)
- scores = det_scores[0, :num_dets[0][0]]
-    classes = det_classes[0, :num_dets[0][0]].astype(int)  # np.int was removed in NumPy 1.24
-
- old_h, old_w = img_h, img_w
- offset_h, offset_w = 0, 0
- if letter_box:
- if (img_w / input_shape[1]) >= (img_h / input_shape[0]):
- old_h = int(input_shape[0] * img_w / input_shape[1])
- offset_h = (old_h - img_h) // 2
- else:
- old_w = int(input_shape[1] * img_h / input_shape[0])
- offset_w = (old_w - img_w) // 2
-
- boxes = boxes * np.array([old_w, old_h, old_w, old_h], dtype=np.float32)
- if letter_box:
- boxes -= np.array([offset_w, offset_h, offset_w, offset_h], dtype=np.float32)
-    boxes = boxes.astype(int)
-
- detected_objects = []
- for box, score, label in zip(boxes, scores, classes):
- detected_objects.append(BoundingBox(label, score, box[0], box[2], box[1], box[3], img_w, img_h))
- return detected_objects
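
A round-trip sketch of the two helpers above: `preprocess` letterboxes a frame into the model input, and `postprocess` maps EfficientNMS-style outputs (fabricated here) back into original image coordinates:

```python
import numpy as np
from processing import preprocess, postprocess

img = np.zeros((480, 640, 3), dtype=np.uint8)  # H x W x C frame
blob = preprocess(img, [640, 640])             # letterboxed CHW float32
print(blob.shape)                              # (3, 640, 640)

# Fabricated stand-ins for the num_dets/det_boxes/det_scores/det_classes outputs
num_dets = np.array([[1]])
det_boxes = np.array([[[100., 180., 540., 460.]]], dtype=np.float32)
det_scores = np.array([[0.9]], dtype=np.float32)
det_classes = np.array([[16.]], dtype=np.float32)

objs = postprocess(num_dets, det_boxes, det_scores, det_classes,
                   img_w=640, img_h=480, input_shape=[640, 640])
print(objs[0].box())  # (100, 100, 540, 380): the 80 px letterbox offset is undone
```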
diff --git a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/render.py b/cv/detection/yolov7/pytorch/deploy/triton-inference-server/render.py
deleted file mode 100644
index dea040156560ed55fed1739e95e26f4598d402f2..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/deploy/triton-inference-server/render.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import numpy as np
-
-import cv2
-
-from math import sqrt
-
-_LINE_THICKNESS_SCALING = 500.0
-
-np.random.seed(0)
-RAND_COLORS = np.random.randint(50, 255, (64, 3), "int")  # used for class visualization
-RAND_COLORS[0] = [220, 220, 220]
-
-def render_box(img, box, color=(200, 200, 200)):
- """
- Render a box. Calculates scaling and thickness automatically.
- :param img: image to render into
- :param box: (x1, y1, x2, y2) - box coordinates
- :param color: (b, g, r) - box color
- :return: updated image
- """
- x1, y1, x2, y2 = box
- thickness = int(
- round(
- (img.shape[0] * img.shape[1])
- / (_LINE_THICKNESS_SCALING * _LINE_THICKNESS_SCALING)
- )
- )
- thickness = max(1, thickness)
- img = cv2.rectangle(
- img,
- (int(x1), int(y1)),
- (int(x2), int(y2)),
- color,
- thickness=thickness
- )
- return img
-
-def render_filled_box(img, box, color=(200, 200, 200)):
- """
-    Render a filled box.
- :param img: image to render into
- :param box: (x1, y1, x2, y2) - box coordinates
- :param color: (b, g, r) - box color
- :return: updated image
- """
- x1, y1, x2, y2 = box
- img = cv2.rectangle(
- img,
- (int(x1), int(y1)),
- (int(x2), int(y2)),
- color,
- thickness=cv2.FILLED
- )
- return img
-
-_TEXT_THICKNESS_SCALING = 700.0
-_TEXT_SCALING = 520.0
-
-
-def get_text_size(img, text, normalised_scaling=1.0):
- """
- Get calculated text size (as box width and height)
- :param img: image reference, used to determine appropriate text scaling
- :param text: text to display
- :param normalised_scaling: additional normalised scaling. Default 1.0.
- :return: (width, height) - width and height of text box
- """
- thickness = int(
- round(
- (img.shape[0] * img.shape[1])
- / (_TEXT_THICKNESS_SCALING * _TEXT_THICKNESS_SCALING)
- )
- * normalised_scaling
- )
- thickness = max(1, thickness)
- scaling = img.shape[0] / _TEXT_SCALING * normalised_scaling
- return cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, scaling, thickness)[0]
-
-
-def render_text(img, text, pos, color=(200, 200, 200), normalised_scaling=1.0):
- """
- Render a text into the image. Calculates scaling and thickness automatically.
- :param img: image to render into
- :param text: text to display
- :param pos: (x, y) - upper left coordinates of render position
- :param color: (b, g, r) - text color
- :param normalised_scaling: additional normalised scaling. Default 1.0.
- :return: updated image
- """
- x, y = pos
- thickness = int(
- round(
- (img.shape[0] * img.shape[1])
- / (_TEXT_THICKNESS_SCALING * _TEXT_THICKNESS_SCALING)
- )
- * normalised_scaling
- )
- thickness = max(1, thickness)
- scaling = img.shape[0] / _TEXT_SCALING * normalised_scaling
- size = get_text_size(img, text, normalised_scaling)
- cv2.putText(
- img,
- text,
- (int(x), int(y + size[1])),
- cv2.FONT_HERSHEY_SIMPLEX,
- scaling,
- color,
- thickness=thickness,
- )
- return img
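
A short sketch of the rendering helpers in use, mirroring the overlay pattern in `client.py` (image and label are dummies):

```python
import numpy as np
from render import render_box, render_filled_box, render_text, get_text_size, RAND_COLORS

img = np.zeros((480, 640, 3), dtype=np.uint8)
label = "dog: 0.91"
img = render_box(img, (100, 120, 300, 400), color=tuple(RAND_COLORS[16 % 64].tolist()))
w, h = get_text_size(img, label, normalised_scaling=0.6)
img = render_filled_box(img, (97, 117, 100 + w, 120 + h), color=(220, 220, 220))
img = render_text(img, label, (100, 120), color=(30, 30, 30), normalised_scaling=0.5)
```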
diff --git a/cv/detection/yolov7/pytorch/detect.py b/cv/detection/yolov7/pytorch/detect.py
deleted file mode 100644
index 5e0c4416a4672584c43e4967d27b13e045a76843..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/detect.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import argparse
-import time
-from pathlib import Path
-
-import cv2
-import torch
-import torch.backends.cudnn as cudnn
-from numpy import random
-
-from models.experimental import attempt_load
-from utils.datasets import LoadStreams, LoadImages
-from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \
- scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path
-from utils.plots import plot_one_box
-from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel
-
-
-def detect(save_img=False):
- source, weights, view_img, save_txt, imgsz, trace = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, not opt.no_trace
- save_img = not opt.nosave and not source.endswith('.txt') # save inference images
- webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(
- ('rtsp://', 'rtmp://', 'http://', 'https://'))
-
- # Directories
- save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run
- (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
-
- # Initialize
- set_logging()
- device = select_device(opt.device)
- half = device.type != 'cpu' # half precision only supported on CUDA
-
- # Load model
- model = attempt_load(weights, map_location=device) # load FP32 model
- stride = int(model.stride.max()) # model stride
- imgsz = check_img_size(imgsz, s=stride) # check img_size
-
- if trace:
- model = TracedModel(model, device, opt.img_size)
-
- if half:
- model.half() # to FP16
-
- # Second-stage classifier
- classify = False
- if classify:
- modelc = load_classifier(name='resnet101', n=2) # initialize
-        modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model'])  # load_state_dict returns key info, not the module
-        modelc.to(device).eval()
-
- # Set Dataloader
- vid_path, vid_writer = None, None
- if webcam:
- view_img = check_imshow()
- cudnn.benchmark = True # set True to speed up constant image size inference
- dataset = LoadStreams(source, img_size=imgsz, stride=stride)
- else:
- dataset = LoadImages(source, img_size=imgsz, stride=stride)
-
- # Get names and colors
- names = model.module.names if hasattr(model, 'module') else model.names
- colors = [[random.randint(0, 255) for _ in range(3)] for _ in names]
-
- # Run inference
- if device.type != 'cpu':
- model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
- old_img_w = old_img_h = imgsz
- old_img_b = 1
-
- t0 = time.time()
- for path, img, im0s, vid_cap in dataset:
- img = torch.from_numpy(img).to(device)
- img = img.half() if half else img.float() # uint8 to fp16/32
- img /= 255.0 # 0 - 255 to 0.0 - 1.0
- if img.ndimension() == 3:
- img = img.unsqueeze(0)
-
- # Warmup
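-        # (A few extra forward passes whenever the batch or image shape changes let
-        # cudnn autotuning and the traced model settle before the timed inference.)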
- if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]):
- old_img_b = img.shape[0]
- old_img_h = img.shape[2]
- old_img_w = img.shape[3]
- for i in range(3):
- model(img, augment=opt.augment)[0]
-
- # Inference
- t1 = time_synchronized()
- with torch.no_grad(): # Calculating gradients would cause a GPU memory leak
- pred = model(img, augment=opt.augment)[0]
- t2 = time_synchronized()
-
- # Apply NMS
- pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
- t3 = time_synchronized()
-
- # Apply Classifier
- if classify:
- pred = apply_classifier(pred, modelc, img, im0s)
-
- # Process detections
- for i, det in enumerate(pred): # detections per image
- if webcam: # batch_size >= 1
- p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count
- else:
- p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0)
-
- p = Path(p) # to Path
- save_path = str(save_dir / p.name) # img.jpg
- txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt
- gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
- if len(det):
- # Rescale boxes from img_size to im0 size
- det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
-
- # Print results
- for c in det[:, -1].unique():
- n = (det[:, -1] == c).sum() # detections per class
- s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
-
- # Write results
- for *xyxy, conf, cls in reversed(det):
- if save_txt: # Write to file
- xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf) if opt.save_conf else (cls, *xywh) # label format
- with open(txt_path + '.txt', 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- if save_img or view_img: # Add bbox to image
- label = f'{names[int(cls)]} {conf:.2f}'
- plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=1)
-
- # Print time (inference + NMS)
- print(f'{s}Done. ({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS')
-
- # Stream results
- if view_img:
- cv2.imshow(str(p), im0)
- cv2.waitKey(1) # 1 millisecond
-
- # Save results (image with detections)
- if save_img:
- if dataset.mode == 'image':
- cv2.imwrite(save_path, im0)
- print(f" The image with the result is saved in: {save_path}")
- else: # 'video' or 'stream'
- if vid_path != save_path: # new video
- vid_path = save_path
- if isinstance(vid_writer, cv2.VideoWriter):
- vid_writer.release() # release previous video writer
- if vid_cap: # video
- fps = vid_cap.get(cv2.CAP_PROP_FPS)
- w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- else: # stream
- fps, w, h = 30, im0.shape[1], im0.shape[0]
- save_path += '.mp4'
- vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
- vid_writer.write(im0)
-
- if save_txt or save_img:
- s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
- #print(f"Results saved to {save_dir}{s}")
-
- print(f'Done. ({time.time() - t0:.3f}s)')
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)')
- parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam
- parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
- parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--view-img', action='store_true', help='display results')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
- parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
- parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
- parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
- parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
- parser.add_argument('--augment', action='store_true', help='augmented inference')
- parser.add_argument('--update', action='store_true', help='update all models')
- parser.add_argument('--project', default='runs/detect', help='save results to project/name')
- parser.add_argument('--name', default='exp', help='save results to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
-    parser.add_argument('--no-trace', action='store_true', help="don't trace model")
- opt = parser.parse_args()
- print(opt)
- #check_requirements(exclude=('pycocotools', 'thop'))
-
- with torch.no_grad():
- if opt.update: # update all models (to fix SourceChangeWarning)
- for opt.weights in ['yolov7.pt']:
- detect()
- strip_optimizer(opt.weights)
- else:
- detect()
diff --git a/cv/detection/yolov7/pytorch/hubconf.py b/cv/detection/yolov7/pytorch/hubconf.py
deleted file mode 100644
index 50ff257e2a5607b0c31c77c5549ffaf6bda758b6..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/hubconf.py
+++ /dev/null
@@ -1,97 +0,0 @@
-"""PyTorch Hub models
-
-Usage:
- import torch
- model = torch.hub.load('repo', 'model')
-"""
-
-from pathlib import Path
-
-import torch
-
-from models.yolo import Model
-from utils.general import check_requirements, set_logging
-from utils.google_utils import attempt_download
-from utils.torch_utils import select_device
-
-dependencies = ['torch', 'yaml']
-check_requirements(Path(__file__).parent / 'requirements.txt', exclude=('pycocotools', 'thop'))
-set_logging()
-
-
-def create(name, pretrained, channels, classes, autoshape):
- """Creates a specified model
-
- Arguments:
-        name (str): name of model, e.g. 'yolov7'
-        pretrained (bool): load pretrained weights into the model
-        channels (int): number of input channels
-        classes (int): number of model classes
-        autoshape (bool): wrap the model with .autoshape() for file/URI/PIL/cv2/np inputs and NMS
-
- Returns:
- pytorch model
- """
- try:
- cfg = list((Path(__file__).parent / 'cfg').rglob(f'{name}.yaml'))[0] # model.yaml path
- model = Model(cfg, channels, classes)
- if pretrained:
- fname = f'{name}.pt' # checkpoint filename
- attempt_download(fname) # download if not found locally
- ckpt = torch.load(fname, map_location=torch.device('cpu')) # load
- msd = model.state_dict() # model state_dict
- csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32
- csd = {k: v for k, v in csd.items() if msd[k].shape == v.shape} # filter
- model.load_state_dict(csd, strict=False) # load
- if len(ckpt['model'].names) == classes:
- model.names = ckpt['model'].names # set class names attribute
- if autoshape:
- model = model.autoshape() # for file/URI/PIL/cv2/np inputs and NMS
- device = select_device('0' if torch.cuda.is_available() else 'cpu') # default to GPU if available
- return model.to(device)
-
- except Exception as e:
-        s = 'Cache may be out of date, try force_reload=True.'
- raise Exception(s) from e
-
-
-def custom(path_or_model='path/to/model.pt', autoshape=True):
-    """custom model
-
- Arguments (3 options):
- path_or_model (str): 'path/to/model.pt'
- path_or_model (dict): torch.load('path/to/model.pt')
- path_or_model (nn.Module): torch.load('path/to/model.pt')['model']
-
- Returns:
- pytorch model
- """
- model = torch.load(path_or_model, map_location=torch.device('cpu')) if isinstance(path_or_model, str) else path_or_model # load checkpoint
- if isinstance(model, dict):
- model = model['ema' if model.get('ema') else 'model'] # load model
-
- hub_model = Model(model.yaml).to(next(model.parameters()).device) # create
- hub_model.load_state_dict(model.float().state_dict()) # load state_dict
- hub_model.names = model.names # class names
- if autoshape:
- hub_model = hub_model.autoshape() # for file/URI/PIL/cv2/np inputs and NMS
- device = select_device('0' if torch.cuda.is_available() else 'cpu') # default to GPU if available
- return hub_model.to(device)
-
-
-def yolov7(pretrained=True, channels=3, classes=80, autoshape=True):
- return create('yolov7', pretrained, channels, classes, autoshape)
-
-
-if __name__ == '__main__':
- model = custom(path_or_model='yolov7.pt') # custom example
- # model = create(name='yolov7', pretrained=True, channels=3, classes=80, autoshape=True) # pretrained example
-
- # Verify inference
- import numpy as np
- from PIL import Image
-
- imgs = [np.zeros((640, 480, 3))]
-
- results = model(imgs) # batched inference
- results.print()
- results.save()
diff --git a/cv/detection/yolov7/pytorch/models/__init__.py b/cv/detection/yolov7/pytorch/models/__init__.py
deleted file mode 100644
index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/models/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# init
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/models/common.py b/cv/detection/yolov7/pytorch/models/common.py
deleted file mode 100644
index edb5edc9fe1b0ad3b345a2103603393e74e5b65c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/models/common.py
+++ /dev/null
@@ -1,2019 +0,0 @@
-import math
-from copy import copy
-from pathlib import Path
-
-import numpy as np
-import pandas as pd
-import requests
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchvision.ops import DeformConv2d
-from PIL import Image
-from torch.cuda import amp
-
-from utils.datasets import letterbox
-from utils.general import non_max_suppression, make_divisible, scale_coords, increment_path, xyxy2xywh
-from utils.plots import color_list, plot_one_box
-from utils.torch_utils import time_synchronized
-
-
-##### basic ####
-
-def autopad(k, p=None): # kernel, padding
- # Pad to 'same'
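-    # e.g. autopad(3) -> 1 and autopad(5) -> 2, so a stride-1 conv keeps H and W unchanged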
- if p is None:
- p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
- return p
-
-
-class MP(nn.Module):
- def __init__(self, k=2):
- super(MP, self).__init__()
- self.m = nn.MaxPool2d(kernel_size=k, stride=k)
-
- def forward(self, x):
- return self.m(x)
-
-
-class SP(nn.Module):
- def __init__(self, k=3, s=1):
- super(SP, self).__init__()
- self.m = nn.MaxPool2d(kernel_size=k, stride=s, padding=k // 2)
-
- def forward(self, x):
- return self.m(x)
-
-
-class ReOrg(nn.Module):
- def __init__(self):
- super(ReOrg, self).__init__()
-
- def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
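-        # Equivalent to a pixel-unshuffle with factor 2: the four interleaved 2x2
-        # sub-grids are stacked on the channel axis (assumes even H and W).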
- return torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)
-
-
-class Concat(nn.Module):
- def __init__(self, dimension=1):
- super(Concat, self).__init__()
- self.d = dimension
-
- def forward(self, x):
- return torch.cat(x, self.d)
-
-
-class Chuncat(nn.Module):
- def __init__(self, dimension=1):
- super(Chuncat, self).__init__()
- self.d = dimension
-
- def forward(self, x):
- x1 = []
- x2 = []
- for xi in x:
- xi1, xi2 = xi.chunk(2, self.d)
- x1.append(xi1)
- x2.append(xi2)
- return torch.cat(x1+x2, self.d)
-
-
-class Shortcut(nn.Module):
- def __init__(self, dimension=0):
- super(Shortcut, self).__init__()
- self.d = dimension
-
- def forward(self, x):
- return x[0]+x[1]
-
-
-class Foldcut(nn.Module):
- def __init__(self, dimension=0):
- super(Foldcut, self).__init__()
- self.d = dimension
-
- def forward(self, x):
- x1, x2 = x.chunk(2, self.d)
- return x1+x2
-
-
-class Conv(nn.Module):
- # Standard convolution
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super(Conv, self).__init__()
- self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
- self.bn = nn.BatchNorm2d(c2)
- self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
-
- def forward(self, x):
- return self.act(self.bn(self.conv(x)))
-
- def fuseforward(self, x):
- return self.act(self.conv(x))
-
-
-class RobustConv(nn.Module):
-    # Robust convolution (use a large kernel size, 7-11, for downsampling and other layers). Train for 300-450 epochs.
- def __init__(self, c1, c2, k=7, s=1, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups
- super(RobustConv, self).__init__()
- self.conv_dw = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act)
- self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0, groups=1, bias=True)
- self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None
-
- def forward(self, x):
- x = x.to(memory_format=torch.channels_last)
- x = self.conv1x1(self.conv_dw(x))
- if self.gamma is not None:
- x = x.mul(self.gamma.reshape(1, -1, 1, 1))
- return x
-
-
-class RobustConv2(nn.Module):
- # Robust convolution 2 (use [32, 5, 2] or [32, 7, 4] or [32, 11, 8] for one of the paths in CSP).
- def __init__(self, c1, c2, k=7, s=4, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups
- super(RobustConv2, self).__init__()
- self.conv_strided = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act)
- self.conv_deconv = nn.ConvTranspose2d(in_channels=c1, out_channels=c2, kernel_size=s, stride=s,
- padding=0, bias=True, dilation=1, groups=1
- )
- self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None
-
- def forward(self, x):
- x = self.conv_deconv(self.conv_strided(x))
- if self.gamma is not None:
- x = x.mul(self.gamma.reshape(1, -1, 1, 1))
- return x
-
-
-def DWConv(c1, c2, k=1, s=1, act=True):
- # Depthwise convolution
- return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
-
-
-class GhostConv(nn.Module):
- # Ghost Convolution https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
- super(GhostConv, self).__init__()
- c_ = c2 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, k, s, None, g, act)
- self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
-
- def forward(self, x):
- y = self.cv1(x)
- return torch.cat([y, self.cv2(y)], 1)
-
-
-class Stem(nn.Module):
- # Stem
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super(Stem, self).__init__()
- c_ = int(c2/2) # hidden channels
- self.cv1 = Conv(c1, c_, 3, 2)
- self.cv2 = Conv(c_, c_, 1, 1)
- self.cv3 = Conv(c_, c_, 3, 2)
- self.pool = torch.nn.MaxPool2d(2, stride=2)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
-
- def forward(self, x):
- x = self.cv1(x)
- return self.cv4(torch.cat((self.cv3(self.cv2(x)), self.pool(x)), dim=1))
-
-
-class DownC(nn.Module):
-    # DownC downsampling block: a strided-conv path concatenated with a max-pool path
- def __init__(self, c1, c2, n=1, k=2):
- super(DownC, self).__init__()
- c_ = int(c1) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c2//2, 3, k)
- self.cv3 = Conv(c1, c2//2, 1, 1)
- self.mp = nn.MaxPool2d(kernel_size=k, stride=k)
-
- def forward(self, x):
- return torch.cat((self.cv2(self.cv1(x)), self.cv3(self.mp(x))), dim=1)
-
-
-class SPP(nn.Module):
- # Spatial pyramid pooling layer used in YOLOv3-SPP
- def __init__(self, c1, c2, k=(5, 9, 13)):
- super(SPP, self).__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
- self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
-
- def forward(self, x):
- x = self.cv1(x)
- return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
-
-
-class Bottleneck(nn.Module):
- # Darknet bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super(Bottleneck, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c2, 3, 1, g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class Res(nn.Module):
- # ResNet bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super(Res, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c_, 3, 1, g=g)
- self.cv3 = Conv(c_, c2, 1, 1)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv3(self.cv2(self.cv1(x))) if self.add else self.cv3(self.cv2(self.cv1(x)))
-
-
-class ResX(Res):
- # ResNet bottleneck
- def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super().__init__(c1, c2, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
-
-
-class Ghost(nn.Module):
- # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
- super(Ghost, self).__init__()
- c_ = c2 // 2
- self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw
- DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
- GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
- self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False),
- Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity()
-
- def forward(self, x):
- return self.conv(x) + self.shortcut(x)
-
-##### end of basic #####
-
-
-##### cspnet #####
-
-class SPPCSPC(nn.Module):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
- super(SPPCSPC, self).__init__()
- c_ = int(2 * c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(c_, c_, 3, 1)
- self.cv4 = Conv(c_, c_, 1, 1)
- self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
- self.cv5 = Conv(4 * c_, c_, 1, 1)
- self.cv6 = Conv(c_, c_, 3, 1)
- self.cv7 = Conv(2 * c_, c2, 1, 1)
-
- def forward(self, x):
- x1 = self.cv4(self.cv3(self.cv1(x)))
- y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1)))
- y2 = self.cv2(x)
- return self.cv7(torch.cat((y1, y2), dim=1))
-
-class GhostSPPCSPC(SPPCSPC):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
- super().__init__(c1, c2, n, shortcut, g, e, k)
- c_ = int(2 * c2 * e) # hidden channels
- self.cv1 = GhostConv(c1, c_, 1, 1)
- self.cv2 = GhostConv(c1, c_, 1, 1)
- self.cv3 = GhostConv(c_, c_, 3, 1)
- self.cv4 = GhostConv(c_, c_, 1, 1)
- self.cv5 = GhostConv(4 * c_, c_, 1, 1)
- self.cv6 = GhostConv(c_, c_, 3, 1)
- self.cv7 = GhostConv(2 * c_, c2, 1, 1)
-
-
-class GhostStem(Stem):
- # Stem
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super().__init__(c1, c2, k, s, p, g, act)
- c_ = int(c2/2) # hidden channels
- self.cv1 = GhostConv(c1, c_, 3, 2)
- self.cv2 = GhostConv(c_, c_, 1, 1)
- self.cv3 = GhostConv(c_, c_, 3, 2)
- self.cv4 = GhostConv(2 * c_, c2, 1, 1)
-
-
-class BottleneckCSPA(nn.Module):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(BottleneckCSPA, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1, 1)
- self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.m(self.cv1(x))
- y2 = self.cv2(x)
- return self.cv3(torch.cat((y1, y2), dim=1))
-
-
-class BottleneckCSPB(nn.Module):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(BottleneckCSPB, self).__init__()
- c_ = int(c2) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1, 1)
- self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- x1 = self.cv1(x)
- y1 = self.m(x1)
- y2 = self.cv2(x1)
- return self.cv3(torch.cat((y1, y2), dim=1))
-
-
-class BottleneckCSPC(nn.Module):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(BottleneckCSPC, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(c_, c_, 1, 1)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(torch.cat((y1, y2), dim=1))
-
-
-class ResCSPA(BottleneckCSPA):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class ResCSPB(BottleneckCSPB):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2) # hidden channels
- self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class ResCSPC(BottleneckCSPC):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class ResXCSPA(ResCSPA):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
-
-class ResXCSPB(ResCSPB):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2) # hidden channels
- self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
-
-class ResXCSPC(ResCSPC):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
-
-class GhostCSPA(BottleneckCSPA):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)])
-
-
-class GhostCSPB(BottleneckCSPB):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2) # hidden channels
- self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)])
-
-
-class GhostCSPC(BottleneckCSPC):
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)])
-
-##### end of cspnet #####
-
-
-##### yolor #####
-
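-# ImplicitA/ImplicitM are the "implicit knowledge" terms of YOLOR
-# (https://arxiv.org/abs/2105.04206): a learned per-channel bias and a learned
-# per-channel scale of shape (1, C, 1, 1), applied as y = a + x and y = m * x.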
-class ImplicitA(nn.Module):
- def __init__(self, channel, mean=0., std=.02):
- super(ImplicitA, self).__init__()
- self.channel = channel
- self.mean = mean
- self.std = std
- self.implicit = nn.Parameter(torch.zeros(1, channel, 1, 1))
- nn.init.normal_(self.implicit, mean=self.mean, std=self.std)
-
- def forward(self, x):
- return self.implicit + x
-
-
-class ImplicitM(nn.Module):
- def __init__(self, channel, mean=1., std=.02):
- super(ImplicitM, self).__init__()
- self.channel = channel
- self.mean = mean
- self.std = std
- self.implicit = nn.Parameter(torch.ones(1, channel, 1, 1))
- nn.init.normal_(self.implicit, mean=self.mean, std=self.std)
-
- def forward(self, x):
- return self.implicit * x
-
-##### end of yolor #####
-
-
-##### repvgg #####
-
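-# RepVGG-style structural re-parameterization: during training RepConv sums a
-# 3x3 conv+BN branch, a 1x1 conv+BN branch and (when c1 == c2 and s == 1) a BN
-# identity branch; for deployment the three branches are folded into a single
-# 3x3 convolution with bias (rbr_reparam), so inference costs one conv.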
-class RepConv(nn.Module):
- # Represented convolution
- # https://arxiv.org/abs/2101.03697
-
- def __init__(self, c1, c2, k=3, s=1, p=None, g=1, act=True, deploy=False):
- super(RepConv, self).__init__()
-
- self.deploy = deploy
- self.groups = g
- self.in_channels = c1
- self.out_channels = c2
-
- assert k == 3
- assert autopad(k, p) == 1
-
- padding_11 = autopad(k, p) - k // 2
-
- self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
-
- if deploy:
- self.rbr_reparam = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=True)
-
- else:
- self.rbr_identity = (nn.BatchNorm2d(num_features=c1) if c2 == c1 and s == 1 else None)
-
- self.rbr_dense = nn.Sequential(
- nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False),
- nn.BatchNorm2d(num_features=c2),
- )
-
- self.rbr_1x1 = nn.Sequential(
-                nn.Conv2d(c1, c2, 1, s, padding_11, groups=g, bias=False),
- nn.BatchNorm2d(num_features=c2),
- )
-
- def forward(self, inputs):
- if hasattr(self, "rbr_reparam"):
- return self.act(self.rbr_reparam(inputs))
-
- if self.rbr_identity is None:
- id_out = 0
- else:
- id_out = self.rbr_identity(inputs)
-
- return self.act(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)
-
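-    # Fusing a conv followed by BN uses w' = w * gamma / sqrt(var + eps) and
-    # b' = beta - mean * gamma / sqrt(var + eps); the fused 1x1 kernel is then
-    # zero-padded to 3x3 so all branches can be summed into one kernel/bias.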
- def get_equivalent_kernel_bias(self):
- kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
- kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
- kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
- return (
- kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid,
- bias3x3 + bias1x1 + biasid,
- )
-
- def _pad_1x1_to_3x3_tensor(self, kernel1x1):
- if kernel1x1 is None:
- return 0
- else:
- return nn.functional.pad(kernel1x1, [1, 1, 1, 1])
-
- def _fuse_bn_tensor(self, branch):
- if branch is None:
- return 0, 0
- if isinstance(branch, nn.Sequential):
- kernel = branch[0].weight
- running_mean = branch[1].running_mean
- running_var = branch[1].running_var
- gamma = branch[1].weight
- beta = branch[1].bias
- eps = branch[1].eps
- else:
- assert isinstance(branch, nn.BatchNorm2d)
- if not hasattr(self, "id_tensor"):
- input_dim = self.in_channels // self.groups
- kernel_value = np.zeros(
- (self.in_channels, input_dim, 3, 3), dtype=np.float32
- )
- for i in range(self.in_channels):
- kernel_value[i, i % input_dim, 1, 1] = 1
- self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
- kernel = self.id_tensor
- running_mean = branch.running_mean
- running_var = branch.running_var
- gamma = branch.weight
- beta = branch.bias
- eps = branch.eps
- std = (running_var + eps).sqrt()
- t = (gamma / std).reshape(-1, 1, 1, 1)
- return kernel * t, beta - running_mean * gamma / std
-
- def repvgg_convert(self):
- kernel, bias = self.get_equivalent_kernel_bias()
- return (
- kernel.detach().cpu().numpy(),
- bias.detach().cpu().numpy(),
- )
-
- def fuse_conv_bn(self, conv, bn):
-
- std = (bn.running_var + bn.eps).sqrt()
- bias = bn.bias - bn.running_mean * bn.weight / std
-
- t = (bn.weight / std).reshape(-1, 1, 1, 1)
- weights = conv.weight * t
-
- bn = nn.Identity()
- conv = nn.Conv2d(in_channels = conv.in_channels,
- out_channels = conv.out_channels,
- kernel_size = conv.kernel_size,
- stride=conv.stride,
- padding = conv.padding,
- dilation = conv.dilation,
- groups = conv.groups,
- bias = True,
- padding_mode = conv.padding_mode)
-
- conv.weight = torch.nn.Parameter(weights)
- conv.bias = torch.nn.Parameter(bias)
- return conv
-
- def fuse_repvgg_block(self):
- if self.deploy:
- return
-        print("RepConv.fuse_repvgg_block")
-
- self.rbr_dense = self.fuse_conv_bn(self.rbr_dense[0], self.rbr_dense[1])
-
- self.rbr_1x1 = self.fuse_conv_bn(self.rbr_1x1[0], self.rbr_1x1[1])
- rbr_1x1_bias = self.rbr_1x1.bias
- weight_1x1_expanded = torch.nn.functional.pad(self.rbr_1x1.weight, [1, 1, 1, 1])
-
- # Fuse self.rbr_identity
-        if isinstance(self.rbr_identity, (nn.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm)):
- # print(f"fuse: rbr_identity == BatchNorm2d or SyncBatchNorm")
- identity_conv_1x1 = nn.Conv2d(
- in_channels=self.in_channels,
- out_channels=self.out_channels,
- kernel_size=1,
- stride=1,
- padding=0,
- groups=self.groups,
- bias=False)
- identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.to(self.rbr_1x1.weight.data.device)
- identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.squeeze().squeeze()
- # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}")
- identity_conv_1x1.weight.data.fill_(0.0)
- identity_conv_1x1.weight.data.fill_diagonal_(1.0)
- identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.unsqueeze(2).unsqueeze(3)
- # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}")
-
- identity_conv_1x1 = self.fuse_conv_bn(identity_conv_1x1, self.rbr_identity)
- bias_identity_expanded = identity_conv_1x1.bias
- weight_identity_expanded = torch.nn.functional.pad(identity_conv_1x1.weight, [1, 1, 1, 1])
- else:
- # print(f"fuse: rbr_identity != BatchNorm2d, rbr_identity = {self.rbr_identity}")
-            bias_identity_expanded = torch.nn.Parameter(torch.zeros_like(rbr_1x1_bias))
-            weight_identity_expanded = torch.nn.Parameter(torch.zeros_like(weight_1x1_expanded))
-
-
- #print(f"self.rbr_1x1.weight = {self.rbr_1x1.weight.shape}, ")
- #print(f"weight_1x1_expanded = {weight_1x1_expanded.shape}, ")
- #print(f"self.rbr_dense.weight = {self.rbr_dense.weight.shape}, ")
-
- self.rbr_dense.weight = torch.nn.Parameter(self.rbr_dense.weight + weight_1x1_expanded + weight_identity_expanded)
- self.rbr_dense.bias = torch.nn.Parameter(self.rbr_dense.bias + rbr_1x1_bias + bias_identity_expanded)
-
- self.rbr_reparam = self.rbr_dense
- self.deploy = True
-
- if self.rbr_identity is not None:
- del self.rbr_identity
- self.rbr_identity = None
-
- if self.rbr_1x1 is not None:
- del self.rbr_1x1
- self.rbr_1x1 = None
-
- if self.rbr_dense is not None:
- del self.rbr_dense
- self.rbr_dense = None
-
-
-class RepBottleneck(Bottleneck):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
-        super().__init__(c1, c2, shortcut, g, e)  # forward the actual arguments rather than hardcoded defaults
- c_ = int(c2 * e) # hidden channels
- self.cv2 = RepConv(c_, c2, 3, 1, g=g)
-
-
-class RepBottleneckCSPA(BottleneckCSPA):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
-
-class RepBottleneckCSPB(BottleneckCSPB):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2) # hidden channels
- self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
-
-class RepBottleneckCSPC(BottleneckCSPC):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
-
-class RepRes(Res):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super().__init__(c1, c2, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.cv2 = RepConv(c_, c_, 3, 1, g=g)
-
-
-class RepResCSPA(ResCSPA):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class RepResCSPB(ResCSPB):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2) # hidden channels
- self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class RepResCSPC(ResCSPC):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class RepResX(ResX):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
- super().__init__(c1, c2, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.cv2 = RepConv(c_, c_, 3, 1, g=g)
-
-
-class RepResXCSPA(ResXCSPA):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class RepResXCSPB(ResXCSPB):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2) # hidden channels
- self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-
-class RepResXCSPC(ResXCSPC):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)])
-
-##### end of repvgg #####
-
-
-##### transformer #####
-
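-# A minimal ViT-style encoder layer: multi-head self-attention followed by a
-# two-layer feed-forward network, each wrapped in a residual connection.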
-class TransformerLayer(nn.Module):
- # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
- def __init__(self, c, num_heads):
- super().__init__()
- self.q = nn.Linear(c, c, bias=False)
- self.k = nn.Linear(c, c, bias=False)
- self.v = nn.Linear(c, c, bias=False)
- self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
- self.fc1 = nn.Linear(c, c, bias=False)
- self.fc2 = nn.Linear(c, c, bias=False)
-
- def forward(self, x):
- x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
- x = self.fc2(self.fc1(x)) + x
- return x
-
-
-class TransformerBlock(nn.Module):
- # Vision Transformer https://arxiv.org/abs/2010.11929
- def __init__(self, c1, c2, num_heads, num_layers):
- super().__init__()
- self.conv = None
- if c1 != c2:
- self.conv = Conv(c1, c2)
- self.linear = nn.Linear(c2, c2) # learnable position embedding
- self.tr = nn.Sequential(*[TransformerLayer(c2, num_heads) for _ in range(num_layers)])
- self.c2 = c2
-
- def forward(self, x):
- if self.conv is not None:
- x = self.conv(x)
-        b, _, w, h = x.shape  # note: torch layout is (B, C, H, W); the w/h names here are swapped but used consistently below
- p = x.flatten(2)
- p = p.unsqueeze(0)
- p = p.transpose(0, 3)
- p = p.squeeze(3)
- e = self.linear(p)
- x = p + e
-
- x = self.tr(x)
- x = x.unsqueeze(3)
- x = x.transpose(0, 3)
- x = x.reshape(b, self.c2, w, h)
- return x
-
-##### end of transformer #####
-
-
-##### yolov5 #####
-
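-# Focus is a space-to-depth stem: it takes every second pixel at four phase
-# offsets, concatenates them on the channel axis (c -> 4c, spatial dims halved)
-# and applies a single convolution, trading resolution for channels.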
-class Focus(nn.Module):
- # Focus wh information into c-space
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
- super(Focus, self).__init__()
- self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
- # self.contract = Contract(gain=2)
-
- def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
- return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
- # return self.conv(self.contract(x))
-
-
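-# SPPF applies one k=5 max-pool three times in sequence; the chained poolings
-# have effective kernel sizes 5, 9 and 13, so concatenating the intermediate
-# results reproduces SPP(k=(5, 9, 13)) while sharing computation.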
-class SPPF(nn.Module):
- # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
- def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * 4, c2, 1, 1)
- self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
-
- def forward(self, x):
- x = self.cv1(x)
- y1 = self.m(x)
- y2 = self.m(y1)
- return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
-
-
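-# Contract and Expand are parameter-free reshapes (pixel-unshuffle/shuffle
-# analogues): Contract folds spatial positions into channels and Expand is the
-# inverse, e.g. (1, 64, 80, 80) <-> (1, 256, 40, 40) for gain=2.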
-class Contract(nn.Module):
- # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
-        N, C, H, W = x.size()  # assert (H % s == 0) and (W % s == 0), 'Indivisible gain'
- s = self.gain
- x = x.view(N, C, H // s, s, W // s, s) # x(1,64,40,2,40,2)
- x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
- return x.view(N, C * s * s, H // s, W // s) # x(1,256,40,40)
-
-
-class Expand(nn.Module):
- # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
-        N, C, H, W = x.size()  # assert C % s ** 2 == 0, 'Indivisible gain'
- s = self.gain
- x = x.view(N, s, s, C // s ** 2, H, W) # x(1,2,2,16,80,80)
- x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
- return x.view(N, C // s ** 2, H * s, W * s) # x(1,16,160,160)
-
-
-class NMS(nn.Module):
- # Non-Maximum Suppression (NMS) module
- conf = 0.25 # confidence threshold
- iou = 0.45 # IoU threshold
- classes = None # (optional list) filter by class
-
- def __init__(self):
- super(NMS, self).__init__()
-
- def forward(self, x):
- return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes)
-
-
-class autoShape(nn.Module):
- # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
- conf = 0.25 # NMS confidence threshold
- iou = 0.45 # NMS IoU threshold
- classes = None # (optional list) filter by class
-
- def __init__(self, model):
- super(autoShape, self).__init__()
- self.model = model.eval()
-
- def autoshape(self):
- print('autoShape already enabled, skipping... ') # model already converted to model.autoshape()
- return self
-
- @torch.no_grad()
- def forward(self, imgs, size=640, augment=False, profile=False):
- # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
- # filename: imgs = 'data/samples/zidane.jpg'
- # URI: = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg'
- # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
- # PIL: = Image.open('image.jpg') # HWC x(640,1280,3)
- # numpy: = np.zeros((640,1280,3)) # HWC
- # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
- # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
-
- t = [time_synchronized()]
- p = next(self.model.parameters()) # for device and type
- if isinstance(imgs, torch.Tensor): # torch
- with amp.autocast(enabled=p.device.type != 'cpu'):
- return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference
-
- # Pre-process
- n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images
- shape0, shape1, files = [], [], [] # image and inference shapes, filenames
- for i, im in enumerate(imgs):
- f = f'image{i}' # filename
- if isinstance(im, str): # filename or uri
- im, f = np.asarray(Image.open(requests.get(im, stream=True).raw if im.startswith('http') else im)), im
- elif isinstance(im, Image.Image): # PIL Image
- im, f = np.asarray(im), getattr(im, 'filename', f) or f
- files.append(Path(f).with_suffix('.jpg').name)
- if im.shape[0] < 5: # image in CHW
- im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
- im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3) # enforce 3ch input
- s = im.shape[:2] # HWC
- shape0.append(s) # image shape
- g = (size / max(s)) # gain
- shape1.append([y * g for y in s])
- imgs[i] = im # update
- shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape
- x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad
- x = np.stack(x, 0) if n > 1 else x[0][None] # stack
- x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW
- x = torch.from_numpy(x).to(p.device).type_as(p) / 255. # uint8 to fp16/32
- t.append(time_synchronized())
-
- with amp.autocast(enabled=p.device.type != 'cpu'):
- # Inference
- y = self.model(x, augment, profile)[0] # forward
- t.append(time_synchronized())
-
- # Post-process
- y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) # NMS
- for i in range(n):
- scale_coords(shape1, y[i][:, :4], shape0[i])
-
- t.append(time_synchronized())
- return Detections(imgs, y, files, t, self.names, x.shape)
-
-
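-# Detections is a results container: it stores the same boxes in four
-# synchronized formats (xyxy/xywh, absolute and normalized) and exposes
-# print/show/save/render/pandas/tolist views over them.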
-class Detections:
- # detections class for YOLOv5 inference results
- def __init__(self, imgs, pred, files, times=None, names=None, shape=None):
- super(Detections, self).__init__()
- d = pred[0].device # device
- gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs] # normalizations
- self.imgs = imgs # list of images as numpy arrays
- self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
- self.names = names # class names
- self.files = files # image filenames
- self.xyxy = pred # xyxy pixels
- self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
- self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
- self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
- self.n = len(self.pred) # number of images (batch size)
-        self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) if times else (0., 0., 0.)  # timestamps (ms)
- self.s = shape # inference BCHW shape
-
- def display(self, pprint=False, show=False, save=False, render=False, save_dir=''):
- colors = color_list()
- for i, (img, pred) in enumerate(zip(self.imgs, self.pred)):
-            s = f'image {i + 1}/{len(self.pred)}: {img.shape[0]}x{img.shape[1]} '
- if pred is not None:
- for c in pred[:, -1].unique():
- n = (pred[:, -1] == c).sum() # detections per class
-                    s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, "  # add to string
- if show or save or render:
- for *box, conf, cls in pred: # xyxy, confidence, class
- label = f'{self.names[int(cls)]} {conf:.2f}'
- plot_one_box(box, img, label=label, color=colors[int(cls) % 10])
- img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img # from np
- if pprint:
-                print(s.rstrip(', '))
- if show:
- img.show(self.files[i]) # show
- if save:
- f = self.files[i]
- img.save(Path(save_dir) / f) # save
- print(f"{'Saved' * (i == 0)} {f}", end=',' if i < self.n - 1 else f' to {save_dir}\n')
- if render:
- self.imgs[i] = np.asarray(img)
-
- def print(self):
- self.display(pprint=True) # print results
- print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t)
-
- def show(self):
- self.display(show=True) # show results
-
- def save(self, save_dir='runs/hub/exp'):
- save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/hub/exp') # increment save_dir
- Path(save_dir).mkdir(parents=True, exist_ok=True)
- self.display(save=True, save_dir=save_dir) # save results
-
- def render(self):
- self.display(render=True) # render results
- return self.imgs
-
- def pandas(self):
- # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
- new = copy(self) # return copy
- ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
- cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
- for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
- a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
- setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
- return new
-
- def tolist(self):
- # return a list of Detections objects, i.e. 'for result in results.tolist():'
-        x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], names=self.names, shape=self.s) for i in range(self.n)]
- for d in x:
- for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
- setattr(d, k, getattr(d, k)[0]) # pop out of list
- return x
-
- def __len__(self):
- return self.n
-
-
-class Classify(nn.Module):
- # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
- super(Classify, self).__init__()
- self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1)
- self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1)
- self.flat = nn.Flatten()
-
- def forward(self, x):
- z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list
- return self.flat(self.conv(z)) # flatten to x(b,c2)
-
-##### end of yolov5 ######
-
-
-##### orepa #####
-
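-# transI_fusebn folds a BatchNorm that follows a conv into the conv weights:
-# w' = w * gamma / sqrt(var + eps), b' = beta - mean * gamma / sqrt(var + eps).
-# Every switch_to_deploy() below ultimately reduces to this transform.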
-def transI_fusebn(kernel, bn):
- gamma = bn.weight
- std = (bn.running_var + bn.eps).sqrt()
- return kernel * ((gamma / std).reshape(-1, 1, 1, 1)), bn.bias - bn.running_mean * gamma / std
-
-
-class ConvBN(nn.Module):
- def __init__(self, in_channels, out_channels, kernel_size,
- stride=1, padding=0, dilation=1, groups=1, deploy=False, nonlinear=None):
- super().__init__()
- if nonlinear is None:
- self.nonlinear = nn.Identity()
- else:
- self.nonlinear = nonlinear
- if deploy:
- self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
- stride=stride, padding=padding, dilation=dilation, groups=groups, bias=True)
- else:
- self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
- stride=stride, padding=padding, dilation=dilation, groups=groups, bias=False)
- self.bn = nn.BatchNorm2d(num_features=out_channels)
-
- def forward(self, x):
- if hasattr(self, 'bn'):
- return self.nonlinear(self.bn(self.conv(x)))
- else:
- return self.nonlinear(self.conv(x))
-
- def switch_to_deploy(self):
- kernel, bias = transI_fusebn(self.conv.weight, self.bn)
- conv = nn.Conv2d(in_channels=self.conv.in_channels, out_channels=self.conv.out_channels, kernel_size=self.conv.kernel_size,
- stride=self.conv.stride, padding=self.conv.padding, dilation=self.conv.dilation, groups=self.conv.groups, bias=True)
- conv.weight.data = kernel
- conv.bias.data = bias
- for para in self.parameters():
- para.detach_()
- self.__delattr__('conv')
- self.__delattr__('bn')
- self.conv = conv
-
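-# OREPA (Online Convolutional Re-parameterization) keeps several branches as
-# raw weight tensors (origin kxk, average-pooling, frequency prior, 1x1-kxk,
-# depthwise-separable) and mixes them with the learned per-branch, per-channel
-# 'vector'; weight_gen() collapses them into one kxk kernel at every forward.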
-class OREPA_3x3_RepConv(nn.Module):
-
- def __init__(self, in_channels, out_channels, kernel_size,
- stride=1, padding=0, dilation=1, groups=1,
- internal_channels_1x1_3x3=None,
- deploy=False, nonlinear=None, single_init=False):
- super(OREPA_3x3_RepConv, self).__init__()
- self.deploy = deploy
-
- if nonlinear is None:
- self.nonlinear = nn.Identity()
- else:
- self.nonlinear = nonlinear
-
- self.kernel_size = kernel_size
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.groups = groups
- assert padding == kernel_size // 2
-
- self.stride = stride
- self.padding = padding
- self.dilation = dilation
-
- self.branch_counter = 0
-
- self.weight_rbr_origin = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), kernel_size, kernel_size))
- nn.init.kaiming_uniform_(self.weight_rbr_origin, a=math.sqrt(1.0))
- self.branch_counter += 1
-
-
- if groups < out_channels:
- self.weight_rbr_avg_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1))
- self.weight_rbr_pfir_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1))
- nn.init.kaiming_uniform_(self.weight_rbr_avg_conv, a=1.0)
- nn.init.kaiming_uniform_(self.weight_rbr_pfir_conv, a=1.0)
- self.register_buffer('weight_rbr_avg_avg', torch.ones(kernel_size, kernel_size).mul(1.0/kernel_size/kernel_size))
- self.branch_counter += 1
-
- else:
- raise NotImplementedError
- self.branch_counter += 1
-
- if internal_channels_1x1_3x3 is None:
- internal_channels_1x1_3x3 = in_channels if groups < out_channels else 2 * in_channels # For mobilenet, it is better to have 2X internal channels
-
- if internal_channels_1x1_3x3 == in_channels:
- self.weight_rbr_1x1_kxk_idconv1 = nn.Parameter(torch.zeros(in_channels, int(in_channels/self.groups), 1, 1))
- id_value = np.zeros((in_channels, int(in_channels/self.groups), 1, 1))
- for i in range(in_channels):
- id_value[i, i % int(in_channels/self.groups), 0, 0] = 1
- id_tensor = torch.from_numpy(id_value).type_as(self.weight_rbr_1x1_kxk_idconv1)
- self.register_buffer('id_tensor', id_tensor)
-
- else:
- self.weight_rbr_1x1_kxk_conv1 = nn.Parameter(torch.Tensor(internal_channels_1x1_3x3, int(in_channels/self.groups), 1, 1))
- nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv1, a=math.sqrt(1.0))
- self.weight_rbr_1x1_kxk_conv2 = nn.Parameter(torch.Tensor(out_channels, int(internal_channels_1x1_3x3/self.groups), kernel_size, kernel_size))
- nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv2, a=math.sqrt(1.0))
- self.branch_counter += 1
-
- expand_ratio = 8
- self.weight_rbr_gconv_dw = nn.Parameter(torch.Tensor(in_channels*expand_ratio, 1, kernel_size, kernel_size))
- self.weight_rbr_gconv_pw = nn.Parameter(torch.Tensor(out_channels, in_channels*expand_ratio, 1, 1))
- nn.init.kaiming_uniform_(self.weight_rbr_gconv_dw, a=math.sqrt(1.0))
- nn.init.kaiming_uniform_(self.weight_rbr_gconv_pw, a=math.sqrt(1.0))
- self.branch_counter += 1
-
- if out_channels == in_channels and stride == 1:
- self.branch_counter += 1
-
- self.vector = nn.Parameter(torch.Tensor(self.branch_counter, self.out_channels))
- self.bn = nn.BatchNorm2d(out_channels)
-
- self.fre_init()
-
- nn.init.constant_(self.vector[0, :], 0.25) #origin
- nn.init.constant_(self.vector[1, :], 0.25) #avg
- nn.init.constant_(self.vector[2, :], 0.0) #prior
- nn.init.constant_(self.vector[3, :], 0.5) #1x1_kxk
- nn.init.constant_(self.vector[4, :], 0.5) #dws_conv
-
-
- def fre_init(self):
- prior_tensor = torch.Tensor(self.out_channels, self.kernel_size, self.kernel_size)
- half_fg = self.out_channels/2
- for i in range(self.out_channels):
- for h in range(3):
- for w in range(3):
- if i < half_fg:
- prior_tensor[i, h, w] = math.cos(math.pi*(h+0.5)*(i+1)/3)
- else:
- prior_tensor[i, h, w] = math.cos(math.pi*(w+0.5)*(i+1-half_fg)/3)
-
- self.register_buffer('weight_rbr_prior', prior_tensor)
-
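-    # Each branch kernel is scaled by its row of self.vector via
-    # einsum('oihw,o->oihw', ...) and the scaled kernels are summed, so
-    # forward() needs only a single F.conv2d with the generated weight.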
- def weight_gen(self):
-
- weight_rbr_origin = torch.einsum('oihw,o->oihw', self.weight_rbr_origin, self.vector[0, :])
-
- weight_rbr_avg = torch.einsum('oihw,o->oihw', torch.einsum('oihw,hw->oihw', self.weight_rbr_avg_conv, self.weight_rbr_avg_avg), self.vector[1, :])
-
- weight_rbr_pfir = torch.einsum('oihw,o->oihw', torch.einsum('oihw,ohw->oihw', self.weight_rbr_pfir_conv, self.weight_rbr_prior), self.vector[2, :])
-
- weight_rbr_1x1_kxk_conv1 = None
- if hasattr(self, 'weight_rbr_1x1_kxk_idconv1'):
- weight_rbr_1x1_kxk_conv1 = (self.weight_rbr_1x1_kxk_idconv1 + self.id_tensor).squeeze()
- elif hasattr(self, 'weight_rbr_1x1_kxk_conv1'):
- weight_rbr_1x1_kxk_conv1 = self.weight_rbr_1x1_kxk_conv1.squeeze()
- else:
- raise NotImplementedError
- weight_rbr_1x1_kxk_conv2 = self.weight_rbr_1x1_kxk_conv2
-
- if self.groups > 1:
- g = self.groups
- t, ig = weight_rbr_1x1_kxk_conv1.size()
- o, tg, h, w = weight_rbr_1x1_kxk_conv2.size()
- weight_rbr_1x1_kxk_conv1 = weight_rbr_1x1_kxk_conv1.view(g, int(t/g), ig)
- weight_rbr_1x1_kxk_conv2 = weight_rbr_1x1_kxk_conv2.view(g, int(o/g), tg, h, w)
- weight_rbr_1x1_kxk = torch.einsum('gti,gothw->goihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2).view(o, ig, h, w)
- else:
- weight_rbr_1x1_kxk = torch.einsum('ti,othw->oihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2)
-
- weight_rbr_1x1_kxk = torch.einsum('oihw,o->oihw', weight_rbr_1x1_kxk, self.vector[3, :])
-
- weight_rbr_gconv = self.dwsc2full(self.weight_rbr_gconv_dw, self.weight_rbr_gconv_pw, self.in_channels)
- weight_rbr_gconv = torch.einsum('oihw,o->oihw', weight_rbr_gconv, self.vector[4, :])
-
- weight = weight_rbr_origin + weight_rbr_avg + weight_rbr_1x1_kxk + weight_rbr_pfir + weight_rbr_gconv
-
- return weight
-
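-    # Converts a depthwise kernel plus a pointwise (1x1) kernel into the
-    # equivalent dense (o, i, h, w) kernel, so the depthwise-separable branch
-    # can be summed with the other branches in weight_gen().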
- def dwsc2full(self, weight_dw, weight_pw, groups):
-
- t, ig, h, w = weight_dw.size()
- o, _, _, _ = weight_pw.size()
- tg = int(t/groups)
- i = int(ig*groups)
- weight_dw = weight_dw.view(groups, tg, ig, h, w)
- weight_pw = weight_pw.squeeze().view(o, groups, tg)
-
- weight_dsc = torch.einsum('gtihw,ogt->ogihw', weight_dw, weight_pw)
- return weight_dsc.view(o, i, h, w)
-
- def forward(self, inputs):
- weight = self.weight_gen()
- out = F.conv2d(inputs, weight, bias=None, stride=self.stride, padding=self.padding, dilation=self.dilation, groups=self.groups)
-
- return self.nonlinear(self.bn(out))
-
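-# RepConv_OREPA mirrors RepConv above, with the dense 3x3 branch replaced by
-# an OREPA_3x3_RepConv and the 1x1 branch by a ConvBN; switch_to_deploy()
-# fuses everything into rbr_reparam exactly as in the RepVGG case.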
-class RepConv_OREPA(nn.Module):
-
- def __init__(self, c1, c2, k=3, s=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False, nonlinear=nn.SiLU()):
- super(RepConv_OREPA, self).__init__()
- self.deploy = deploy
- self.groups = groups
- self.in_channels = c1
- self.out_channels = c2
-
- self.padding = padding
- self.dilation = dilation
- self.groups = groups
-
- assert k == 3
- assert padding == 1
-
- padding_11 = padding - k // 2
-
- if nonlinear is None:
- self.nonlinearity = nn.Identity()
- else:
- self.nonlinearity = nonlinear
-
- if use_se:
- self.se = SEBlock(self.out_channels, internal_neurons=self.out_channels // 16)
- else:
- self.se = nn.Identity()
-
- if deploy:
- self.rbr_reparam = nn.Conv2d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s,
- padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode)
-
- else:
- self.rbr_identity = nn.BatchNorm2d(num_features=self.in_channels) if self.out_channels == self.in_channels and s == 1 else None
- self.rbr_dense = OREPA_3x3_RepConv(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s, padding=padding, groups=groups, dilation=1)
- self.rbr_1x1 = ConvBN(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=1, stride=s, padding=padding_11, groups=groups, dilation=1)
- print('RepVGG Block, identity = ', self.rbr_identity)
-
-
- def forward(self, inputs):
- if hasattr(self, 'rbr_reparam'):
- return self.nonlinearity(self.se(self.rbr_reparam(inputs)))
-
- if self.rbr_identity is None:
- id_out = 0
- else:
- id_out = self.rbr_identity(inputs)
-
- out1 = self.rbr_dense(inputs)
- out2 = self.rbr_1x1(inputs)
- out3 = id_out
- out = out1 + out2 + out3
-
- return self.nonlinearity(self.se(out))
-
-
- # Optional. This improves the accuracy and facilitates quantization.
- # 1. Cancel the original weight decay on rbr_dense.conv.weight and rbr_1x1.conv.weight.
- # 2. Use like this.
- # loss = criterion(....)
- # for every RepVGGBlock blk:
-    #           loss += weight_decay_coefficient * 0.5 * blk.get_custom_L2()
- # optimizer.zero_grad()
- # loss.backward()
-
- # Not used for OREPA
- def get_custom_L2(self):
- K3 = self.rbr_dense.weight_gen()
- K1 = self.rbr_1x1.conv.weight
- t3 = (self.rbr_dense.bn.weight / ((self.rbr_dense.bn.running_var + self.rbr_dense.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach()
- t1 = (self.rbr_1x1.bn.weight / ((self.rbr_1x1.bn.running_var + self.rbr_1x1.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach()
-
- l2_loss_circle = (K3 ** 2).sum() - (K3[:, :, 1:2, 1:2] ** 2).sum() # The L2 loss of the "circle" of weights in 3x3 kernel. Use regular L2 on them.
- eq_kernel = K3[:, :, 1:2, 1:2] * t3 + K1 * t1 # The equivalent resultant central point of 3x3 kernel.
- l2_loss_eq_kernel = (eq_kernel ** 2 / (t3 ** 2 + t1 ** 2)).sum() # Normalize for an L2 coefficient comparable to regular L2.
- return l2_loss_eq_kernel + l2_loss_circle
-
- def get_equivalent_kernel_bias(self):
- kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
- kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
- kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
- return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
-
- def _pad_1x1_to_3x3_tensor(self, kernel1x1):
- if kernel1x1 is None:
- return 0
- else:
- return torch.nn.functional.pad(kernel1x1, [1,1,1,1])
-
- def _fuse_bn_tensor(self, branch):
- if branch is None:
- return 0, 0
- if not isinstance(branch, nn.BatchNorm2d):
- if isinstance(branch, OREPA_3x3_RepConv):
- kernel = branch.weight_gen()
- elif isinstance(branch, ConvBN):
- kernel = branch.conv.weight
- else:
- raise NotImplementedError
- running_mean = branch.bn.running_mean
- running_var = branch.bn.running_var
- gamma = branch.bn.weight
- beta = branch.bn.bias
- eps = branch.bn.eps
- else:
- if not hasattr(self, 'id_tensor'):
- input_dim = self.in_channels // self.groups
- kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
- for i in range(self.in_channels):
- kernel_value[i, i % input_dim, 1, 1] = 1
- self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
- kernel = self.id_tensor
- running_mean = branch.running_mean
- running_var = branch.running_var
- gamma = branch.weight
- beta = branch.bias
- eps = branch.eps
- std = (running_var + eps).sqrt()
- t = (gamma / std).reshape(-1, 1, 1, 1)
- return kernel * t, beta - running_mean * gamma / std
-
- def switch_to_deploy(self):
- if hasattr(self, 'rbr_reparam'):
- return
-        print("RepConv_OREPA.switch_to_deploy")
- kernel, bias = self.get_equivalent_kernel_bias()
- self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.in_channels, out_channels=self.rbr_dense.out_channels,
- kernel_size=self.rbr_dense.kernel_size, stride=self.rbr_dense.stride,
- padding=self.rbr_dense.padding, dilation=self.rbr_dense.dilation, groups=self.rbr_dense.groups, bias=True)
- self.rbr_reparam.weight.data = kernel
- self.rbr_reparam.bias.data = bias
- for para in self.parameters():
- para.detach_()
- self.__delattr__('rbr_dense')
- self.__delattr__('rbr_1x1')
- if hasattr(self, 'rbr_identity'):
- self.__delattr__('rbr_identity')
-
-##### end of orepa #####
-
-
-##### swin transformer #####
-
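-# Window attention (Swin, https://arxiv.org/abs/2103.14030): multi-head
-# self-attention restricted to non-overlapping windows, with a learned
-# relative-position bias table indexed by relative_position_index added to
-# the attention logits before the softmax.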
-class WindowAttention(nn.Module):
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- nn.init.normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
-
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- # print(attn.dtype, v.dtype)
- try:
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
-        except RuntimeError:  # dtype mismatch between attn and v (e.g. fp16 under AMP); retry with half-precision attn
- x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-class Mlp(nn.Module):
-
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
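-# window_partition/window_reverse convert between (B, H, W, C) feature maps
-# and (num_windows*B, ws, ws, C) window batches; one is the inverse of the
-# other, which the shifted-window attention below relies on.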
-def window_partition(x, window_size):
-
- B, H, W, C = x.shape
-    assert H % window_size == 0 and W % window_size == 0, 'feature map H and W must be divisible by window_size'
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-def window_reverse(windows, window_size, H, W):
-
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class SwinTransformerLayer(nn.Module):
-
- def __init__(self, dim, num_heads, window_size=8, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.SiLU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- # if min(self.input_resolution) <= self.window_size:
- # # if window size is larger than input resolution, we don't partition windows
- # self.shift_size = 0
- # self.window_size = min(self.input_resolution)
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=(self.window_size, self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
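-    # After the cyclic shift, one window can contain tokens from disjoint
-    # image regions; create_mask labels each region, and pairs of tokens with
-    # different labels receive a -100 additive bias, so the softmax
-    # effectively zeroes their attention weights.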
- def create_mask(self, H, W):
- # calculate attention mask for SW-MSA
- img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- return attn_mask
-
- def forward(self, x):
- # reshape x[b c h w] to x[b l c]
- _, _, H_, W_ = x.shape
-
- Padding = False
-        if min(H_, W_) < self.window_size or H_ % self.window_size != 0 or W_ % self.window_size != 0:
- Padding = True
- # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.')
- pad_r = (self.window_size - W_ % self.window_size) % self.window_size
- pad_b = (self.window_size - H_ % self.window_size) % self.window_size
- x = F.pad(x, (0, pad_r, 0, pad_b))
-
- # print('2', x.shape)
- B, C, H, W = x.shape
- L = H * W
- x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c
-
- # create mask from init to forward
- if self.shift_size > 0:
- attn_mask = self.create_mask(H, W).to(x.device)
- else:
- attn_mask = None
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w
-
- if Padding:
- x = x[:, :, :H_, :W_] # reverse padding
-
- return x
-
-
-class SwinTransformerBlock(nn.Module):
- def __init__(self, c1, c2, num_heads, num_layers, window_size=8):
- super().__init__()
- self.conv = None
- if c1 != c2:
- self.conv = Conv(c1, c2)
-
- # remove input_resolution
- self.blocks = nn.Sequential(*[SwinTransformerLayer(dim=c2, num_heads=num_heads, window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)])
-
- def forward(self, x):
- if self.conv is not None:
- x = self.conv(x)
- x = self.blocks(x)
- return x
-
-
-class STCSPA(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(STCSPA, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1, 1)
- num_heads = c_ // 32
- self.m = SwinTransformerBlock(c_, c_, num_heads, n)
- #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.m(self.cv1(x))
- y2 = self.cv2(x)
- return self.cv3(torch.cat((y1, y2), dim=1))
-
-
-class STCSPB(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(STCSPB, self).__init__()
- c_ = int(c2) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1, 1)
- num_heads = c_ // 32
- self.m = SwinTransformerBlock(c_, c_, num_heads, n)
- #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- x1 = self.cv1(x)
- y1 = self.m(x1)
- y2 = self.cv2(x1)
- return self.cv3(torch.cat((y1, y2), dim=1))
-
-
-class STCSPC(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(STCSPC, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(c_, c_, 1, 1)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- num_heads = c_ // 32
- self.m = SwinTransformerBlock(c_, c_, num_heads, n)
- #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(torch.cat((y1, y2), dim=1))
-
-##### end of swin transformer #####
-
-
-##### swin transformer v2 #####
-
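-# Swin v2 attention (https://arxiv.org/abs/2111.09883): replaces dot-product
-# attention with cosine attention scaled by a learned, clamped logit_scale,
-# and replaces the bias table with a small MLP (cpb_mlp) evaluated on
-# log-spaced relative coordinates.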
-class WindowAttention_v2(nn.Module):
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0.,
- pretrained_window_size=[0, 0]):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.pretrained_window_size = pretrained_window_size
- self.num_heads = num_heads
-
- self.logit_scale = nn.Parameter(torch.log(10 * torch.ones((num_heads, 1, 1))), requires_grad=True)
-
- # mlp to generate continuous relative position bias
- self.cpb_mlp = nn.Sequential(nn.Linear(2, 512, bias=True),
- nn.ReLU(inplace=True),
- nn.Linear(512, num_heads, bias=False))
-
- # get relative_coords_table
- relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0], dtype=torch.float32)
- relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1], dtype=torch.float32)
- relative_coords_table = torch.stack(
- torch.meshgrid([relative_coords_h,
- relative_coords_w])).permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2
- if pretrained_window_size[0] > 0:
- relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1)
- relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1)
- else:
- relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1)
- relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1)
- relative_coords_table *= 8 # normalize to -8, 8
- relative_coords_table = torch.sign(relative_coords_table) * torch.log2(
- torch.abs(relative_coords_table) + 1.0) / np.log2(8)
-
- self.register_buffer("relative_coords_table", relative_coords_table)
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=False)
- if qkv_bias:
- self.q_bias = nn.Parameter(torch.zeros(dim))
- self.v_bias = nn.Parameter(torch.zeros(dim))
- else:
- self.q_bias = None
- self.v_bias = None
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
-
- B_, N, C = x.shape
- qkv_bias = None
- if self.q_bias is not None:
- qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias))
- qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
- qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- # cosine attention
- attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1))
- logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. / 0.01))).exp()
- attn = attn * logit_scale
-
- relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads)
- relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- relative_position_bias = 16 * torch.sigmoid(relative_position_bias)
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- try:
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
-        except RuntimeError:  # dtype mismatch under AMP; retry with half-precision attn
- x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C)
-
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
- def extra_repr(self) -> str:
- return f'dim={self.dim}, window_size={self.window_size}, ' \
- f'pretrained_window_size={self.pretrained_window_size}, num_heads={self.num_heads}'
-
- def flops(self, N):
- # calculate flops for 1 window with token length of N
- flops = 0
- # qkv = self.qkv(x)
- flops += N * self.dim * 3 * self.dim
- # attn = (q @ k.transpose(-2, -1))
- flops += self.num_heads * N * (self.dim // self.num_heads) * N
- # x = (attn @ v)
- flops += self.num_heads * N * N * (self.dim // self.num_heads)
- # x = self.proj(x)
- flops += N * self.dim * self.dim
- return flops
-
-class Mlp_v2(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition_v2(x, window_size):
-
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse_v2(windows, window_size, H, W):
-
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class SwinTransformerLayer_v2(nn.Module):
-
- def __init__(self, dim, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.SiLU, norm_layer=nn.LayerNorm, pretrained_window_size=0):
- super().__init__()
- self.dim = dim
- #self.input_resolution = input_resolution
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- #if min(self.input_resolution) <= self.window_size:
- # # if window size is larger than input resolution, we don't partition windows
- # self.shift_size = 0
- # self.window_size = min(self.input_resolution)
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention_v2(
- dim, window_size=(self.window_size, self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop,
- pretrained_window_size=(pretrained_window_size, pretrained_window_size))
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp_v2(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def create_mask(self, H, W):
- # calculate attention mask for SW-MSA
- img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
-        mask_windows = window_partition_v2(img_mask, self.window_size)  # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- return attn_mask
-
- def forward(self, x):
- # reshape x[b c h w] to x[b l c]
- _, _, H_, W_ = x.shape
-
- Padding = False
- if min(H_, W_) < self.window_size or H_ % self.window_size != 0 or W_ % self.window_size != 0:
- Padding = True
- # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.')
- pad_r = (self.window_size - W_ % self.window_size) % self.window_size
- pad_b = (self.window_size - H_ % self.window_size) % self.window_size
- x = F.pad(x, (0, pad_r, 0, pad_b))
-
- # print('2', x.shape)
- B, C, H, W = x.shape
- L = H * W
- x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c
-
- # create mask from init to forward
- if self.shift_size > 0:
- attn_mask = self.create_mask(H, W).to(x.device)
- else:
- attn_mask = None
-
- shortcut = x
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition_v2(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse_v2(attn_windows, self.window_size, H, W) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
- x = x.view(B, H * W, C)
- x = shortcut + self.drop_path(self.norm1(x))
-
- # FFN
- x = x + self.drop_path(self.norm2(self.mlp(x)))
- x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w
-
- if Padding:
- x = x[:, :, :H_, :W_] # reverse padding
-
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, num_heads={self.num_heads}, " \
- f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
-
- def flops(self, H, W):
- # input_resolution was removed from __init__, so the feature-map size is passed in
- flops = 0
- # norm1
- flops += self.dim * H * W
- # W-MSA/SW-MSA
- nW = H * W / self.window_size / self.window_size
- flops += nW * self.attn.flops(self.window_size * self.window_size)
- # mlp
- flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
- # norm2
- flops += self.dim * H * W
- return flops
-
-
-class SwinTransformer2Block(nn.Module):
- def __init__(self, c1, c2, num_heads, num_layers, window_size=7):
- super().__init__()
- self.conv = None
- if c1 != c2:
- self.conv = Conv(c1, c2)
-
- # remove input_resolution
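- # blocks alternate plain W-MSA (shift 0) and shifted SW-MSA (shift window_size // 2)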
- self.blocks = nn.Sequential(*[SwinTransformerLayer_v2(dim=c2, num_heads=num_heads, window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)])
-
- def forward(self, x):
- if self.conv is not None:
- x = self.conv(x)
- x = self.blocks(x)
- return x
-
-
-class ST2CSPA(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(ST2CSPA, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1, 1)
- num_heads = c_ // 32
- self.m = SwinTransformer2Block(c_, c_, num_heads, n)
- #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.m(self.cv1(x))
- y2 = self.cv2(x)
- return self.cv3(torch.cat((y1, y2), dim=1))
-
-
-class ST2CSPB(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(ST2CSPB, self).__init__()
- c_ = int(c2) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1, 1)
- num_heads = c_ // 32
- self.m = SwinTransformer2Block(c_, c_, num_heads, n)
- #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- x1 = self.cv1(x)
- y1 = self.m(x1)
- y2 = self.cv2(x1)
- return self.cv3(torch.cat((y1, y2), dim=1))
-
-
-class ST2CSPC(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
- super(ST2CSPC, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(c_, c_, 1, 1)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- num_heads = c_ // 32
- self.m = SwinTransformer2Block(c_, c_, num_heads, n)
- #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(torch.cat((y1, y2), dim=1))
-
-##### end of swin transformer v2 #####
diff --git a/cv/detection/yolov7/pytorch/models/experimental.py b/cv/detection/yolov7/pytorch/models/experimental.py
deleted file mode 100644
index 735d7aa0ebe7dbf3c4b062ebc3858cb5f9ebab40..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/models/experimental.py
+++ /dev/null
@@ -1,272 +0,0 @@
-import numpy as np
-import random
-import torch
-import torch.nn as nn
-
-from models.common import Conv, DWConv
-from utils.google_utils import attempt_download
-
-
-class CrossConv(nn.Module):
- # Cross Convolution Downsample
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
- # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
- super(CrossConv, self).__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, (1, k), (1, s))
- self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class Sum(nn.Module):
- # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
- def __init__(self, n, weight=False): # n: number of inputs
- super(Sum, self).__init__()
- self.weight = weight # apply weights boolean
- self.iter = range(n - 1) # iter object
- if weight:
- self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights
-
- def forward(self, x):
- y = x[0] # no weight
- if self.weight:
- w = torch.sigmoid(self.w) * 2
- for i in self.iter:
- y = y + x[i + 1] * w[i]
- else:
- for i in self.iter:
- y = y + x[i + 1]
- return y
-
-
-class MixConv2d(nn.Module):
- # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595
- def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True):
- super(MixConv2d, self).__init__()
- groups = len(k)
- if equal_ch: # equal c_ per group
- i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices
- c_ = [(i == g).sum() for g in range(groups)] # intermediate channels
- else: # equal weight.numel() per group
- b = [c2] + [0] * groups
- a = np.eye(groups + 1, groups, k=-1)
- a -= np.roll(a, 1, axis=1)
- a *= np.array(k) ** 2
- a[0] = 1
- c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
-
- self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)])
- self.bn = nn.BatchNorm2d(c2)
- self.act = nn.LeakyReLU(0.1, inplace=True)
-
- def forward(self, x):
- return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
-
-
-class Ensemble(nn.ModuleList):
- # Ensemble of models
- def __init__(self):
- super(Ensemble, self).__init__()
-
- def forward(self, x, augment=False):
- y = []
- for module in self:
- y.append(module(x, augment)[0])
- # y = torch.stack(y).max(0)[0] # max ensemble
- # y = torch.stack(y).mean(0) # mean ensemble
- y = torch.cat(y, 1) # nms ensemble
- return y, None # inference, train output
-
-
-class ORT_NMS(torch.autograd.Function):
- '''ONNX-Runtime NMS operation'''
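- # forward() only fabricates outputs with plausible shapes so tracing succeeds;
- # the real NMS is the NonMaxSuppression node emitted by symbolic() at export time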
- @staticmethod
- def forward(ctx,
- boxes,
- scores,
- max_output_boxes_per_class=torch.tensor([100]),
- iou_threshold=torch.tensor([0.45]),
- score_threshold=torch.tensor([0.25])):
- device = boxes.device
- batch = scores.shape[0]
- num_det = random.randint(0, 100)
- batches = torch.randint(0, batch, (num_det,)).sort()[0].to(device)
- idxs = torch.arange(100, 100 + num_det).to(device)
- zeros = torch.zeros((num_det,), dtype=torch.int64).to(device)
- selected_indices = torch.cat([batches[None], zeros[None], idxs[None]], 0).T.contiguous()
- selected_indices = selected_indices.to(torch.int64)
- return selected_indices
-
- @staticmethod
- def symbolic(g, boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold):
- return g.op("NonMaxSuppression", boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold)
-
-
-class TRT_NMS(torch.autograd.Function):
- '''TensorRT NMS operation'''
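- # like ORT_NMS, forward() returns random placeholders for tracing;
- # symbolic() emits the TensorRT EfficientNMS_TRT plugin node instead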
- @staticmethod
- def forward(
- ctx,
- boxes,
- scores,
- background_class=-1,
- box_coding=1,
- iou_threshold=0.45,
- max_output_boxes=100,
- plugin_version="1",
- score_activation=0,
- score_threshold=0.25,
- ):
- batch_size, num_boxes, num_classes = scores.shape
- num_det = torch.randint(0, max_output_boxes, (batch_size, 1), dtype=torch.int32)
- det_boxes = torch.randn(batch_size, max_output_boxes, 4)
- det_scores = torch.randn(batch_size, max_output_boxes)
- det_classes = torch.randint(0, num_classes, (batch_size, max_output_boxes), dtype=torch.int32)
- return num_det, det_boxes, det_scores, det_classes
-
- @staticmethod
- def symbolic(g,
- boxes,
- scores,
- background_class=-1,
- box_coding=1,
- iou_threshold=0.45,
- max_output_boxes=100,
- plugin_version="1",
- score_activation=0,
- score_threshold=0.25):
- out = g.op("TRT::EfficientNMS_TRT",
- boxes,
- scores,
- background_class_i=background_class,
- box_coding_i=box_coding,
- iou_threshold_f=iou_threshold,
- max_output_boxes_i=max_output_boxes,
- plugin_version_s=plugin_version,
- score_activation_i=score_activation,
- score_threshold_f=score_threshold,
- outputs=4)
- nums, boxes, scores, classes = out
- return nums, boxes, scores, classes
-
-
-class ONNX_ORT(nn.Module):
- '''onnx module with ONNX-Runtime NMS operation.'''
- def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=640, device=None, n_classes=80):
- super().__init__()
- self.device = device if device else torch.device("cpu")
- self.max_obj = torch.tensor([max_obj]).to(device)
- self.iou_threshold = torch.tensor([iou_thres]).to(device)
- self.score_threshold = torch.tensor([score_thres]).to(device)
- self.max_wh = max_wh # nonzero: class-aware NMS via per-class box offsets; 0: class-agnostic
- self.convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=self.device)
- self.n_classes = n_classes
-
- def forward(self, x):
- boxes = x[:, :, :4]
- conf = x[:, :, 4:5]
- scores = x[:, :, 5:]
- if self.n_classes == 1:
- scores = conf # for single-class models, cls_loss is 0 and cls_conf is always 0.5,
- # so there is no need to multiply
- else:
- scores *= conf # conf = obj_conf * cls_conf
- boxes @= self.convert_matrix
- max_score, category_id = scores.max(2, keepdim=True)
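- # class-offset trick: shift each class's boxes by category_id * max_wh so a single
- # class-agnostic NMS call never suppresses boxes of different classes against each other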
- dis = category_id.float() * self.max_wh
- nmsbox = boxes + dis
- max_score_tp = max_score.transpose(1, 2).contiguous()
- selected_indices = ORT_NMS.apply(nmsbox, max_score_tp, self.max_obj, self.iou_threshold, self.score_threshold)
- X, Y = selected_indices[:, 0], selected_indices[:, 2]
- selected_boxes = boxes[X, Y, :]
- selected_categories = category_id[X, Y, :].float()
- selected_scores = max_score[X, Y, :]
- X = X.unsqueeze(1).float()
- return torch.cat([X, selected_boxes, selected_categories, selected_scores], 1)
-
-class ONNX_TRT(nn.Module):
- '''onnx module with TensorRT NMS operation.'''
- def __init__(self, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None ,device=None, n_classes=80):
- super().__init__()
- assert max_wh is None
- self.device = device if device else torch.device('cpu')
- self.background_class = -1
- self.box_coding = 1
- self.iou_threshold = iou_thres
- self.max_obj = max_obj
- self.plugin_version = '1'
- self.score_activation = 0
- self.score_threshold = score_thres
- self.n_classes = n_classes
-
- def forward(self, x):
- boxes = x[:, :, :4]
- conf = x[:, :, 4:5]
- scores = x[:, :, 5:]
- if self.n_classes == 1:
- scores = conf # for single-class models, cls_loss is 0 and cls_conf is always 0.5,
- # so there is no need to multiply
- else:
- scores *= conf # conf = obj_conf * cls_conf
- num_det, det_boxes, det_scores, det_classes = TRT_NMS.apply(boxes, scores, self.background_class, self.box_coding,
- self.iou_threshold, self.max_obj,
- self.plugin_version, self.score_activation,
- self.score_threshold)
- return num_det, det_boxes, det_scores, det_classes
-
-
-class End2End(nn.Module):
- '''export onnx or tensorrt model with NMS operation.'''
- def __init__(self, model, max_obj=100, iou_thres=0.45, score_thres=0.25, max_wh=None, device=None, n_classes=80):
- super().__init__()
- device = device if device else torch.device('cpu')
- assert isinstance(max_wh, int) or max_wh is None
- self.model = model.to(device)
- self.model.model[-1].end2end = True
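- # max_wh is None -> TensorRT EfficientNMS path; an int -> ONNX-Runtime NMS path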
- self.patch_model = ONNX_TRT if max_wh is None else ONNX_ORT
- self.end2end = self.patch_model(max_obj, iou_thres, score_thres, max_wh, device, n_classes)
- self.end2end.eval()
-
- def forward(self, x):
- x = self.model(x)
- x = self.end2end(x)
- return x
-
-
-def attempt_load(weights, map_location=None):
- # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
- model = Ensemble()
- for w in weights if isinstance(weights, list) else [weights]:
- attempt_download(w)
- ckpt = torch.load(w, map_location=map_location) # load
- model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model
-
- # Compatibility updates
- for m in model.modules():
- if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
- m.inplace = True # pytorch 1.7.0 compatibility
- elif type(m) is nn.Upsample:
- m.recompute_scale_factor = None # torch 1.11.0 compatibility
- elif type(m) is Conv:
- m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility
-
- if len(model) == 1:
- return model[-1] # return model
- else:
- print('Ensemble created with %s\n' % weights)
- for k in ['names', 'stride']:
- setattr(model, k, getattr(model[-1], k))
- return model # return ensemble
-
-
diff --git a/cv/detection/yolov7/pytorch/models/yolo.py b/cv/detection/yolov7/pytorch/models/yolo.py
deleted file mode 100644
index 95a019c6aeec8c3f1d582907d5fe7ff3ed6b9369..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/models/yolo.py
+++ /dev/null
@@ -1,843 +0,0 @@
-import argparse
-import logging
-import sys
-from copy import deepcopy
-
-sys.path.append('./') # to run '$ python *.py' files in subdirectories
-logger = logging.getLogger(__name__)
-import torch
-from models.common import *
-from models.experimental import *
-from utils.autoanchor import check_anchor_order
-from utils.general import make_divisible, check_file, set_logging
-from utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \
- select_device, copy_attr
-from utils.loss import SigmoidBin
-
-try:
- import thop # for FLOPS computation
-except ImportError:
- thop = None
-
-
-class Detect(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
- end2end = False
- include_nms = False
- concat = False
-
- def __init__(self, nc=80, anchors=(), ch=()): # detection layer
- super(Detect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
- y = x[i].sigmoid()
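- # decode: xy = (2*sigmoid - 0.5 + grid) * stride keeps offsets within (-0.5, 1.5) cells;
- # wh = (2*sigmoid)**2 * anchor bounds each box at 4x its anchor size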
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
- xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy
- wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, -1, self.no))
-
- if self.training:
- out = x
- elif self.end2end:
- out = torch.cat(z, 1)
- elif self.include_nms:
- z = self.convert(z)
- out = (z, )
- elif self.concat:
- out = torch.cat(z, 1)
- else:
- out = (torch.cat(z, 1), x)
-
- return out
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
- def convert(self, z):
- z = torch.cat(z, 1)
- box = z[:, :, :4]
- conf = z[:, :, 4:5]
- score = z[:, :, 5:]
- score *= conf
- convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=z.device)
- box @= convert_matrix
- return (box, score)
-
-
-class IDetect(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
- end2end = False
- include_nms = False
- concat = False
-
- def __init__(self, nc=80, anchors=(), ch=()): # detection layer
- super(IDetect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch)
- self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch)
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](self.ia[i](x[i])) # conv
- x[i] = self.im[i](x[i])
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- def fuseforward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
- xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy
- wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, -1, self.no))
-
- if self.training:
- out = x
- elif self.end2end:
- out = torch.cat(z, 1)
- elif self.include_nms:
- z = self.convert(z)
- out = (z, )
- elif self.concat:
- out = torch.cat(z, 1)
- else:
- out = (torch.cat(z, 1), x)
-
- return out
-
- def fuse(self):
- print("IDetect.fuse")
- # fuse ImplicitA and Convolution
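- # conv(x + a) = conv(x) + W @ a, so the implicit addition folds into the conv bias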
- for i in range(len(self.m)):
- c1,c2,_,_ = self.m[i].weight.shape
- c1_,c2_, _,_ = self.ia[i].implicit.shape
- self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1)
-
- # fuse ImplicitM and Convolution
- for i in range(len(self.m)):
- c1,c2, _,_ = self.im[i].implicit.shape
- self.m[i].bias *= self.im[i].implicit.reshape(c2)
- self.m[i].weight *= self.im[i].implicit.transpose(0,1)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
- def convert(self, z):
- z = torch.cat(z, 1)
- box = z[:, :, :4]
- conf = z[:, :, 4:5]
- score = z[:, :, 5:]
- score *= conf
- convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=z.device)
- box @= convert_matrix
- return (box, score)
-
-
-class IKeypoint(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
-
- def __init__(self, nc=80, anchors=(), nkpt=17, ch=(), inplace=True, dw_conv_kpt=False): # detection layer
- super(IKeypoint, self).__init__()
- self.nc = nc # number of classes
- self.nkpt = nkpt
- self.dw_conv_kpt = dw_conv_kpt
- self.no_det=(nc + 5) # number of outputs per anchor for box and class
- self.no_kpt = 3 * self.nkpt # number of outputs per anchor for keypoints (x, y, conf each)
- self.no = self.no_det+self.no_kpt
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- self.flip_test = False
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no_det * self.na, 1) for x in ch) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch)
- self.im = nn.ModuleList(ImplicitM(self.no_det * self.na) for _ in ch)
-
- if self.nkpt is not None:
- if self.dw_conv_kpt: #keypoint head is slightly more complex
- self.m_kpt = nn.ModuleList(
- nn.Sequential(DWConv(x, x, k=3), Conv(x,x),
- DWConv(x, x, k=3), Conv(x, x),
- DWConv(x, x, k=3), Conv(x,x),
- DWConv(x, x, k=3), Conv(x, x),
- DWConv(x, x, k=3), Conv(x, x),
- DWConv(x, x, k=3), nn.Conv2d(x, self.no_kpt * self.na, 1)) for x in ch)
- else: #keypoint head is a single convolution
- self.m_kpt = nn.ModuleList(nn.Conv2d(x, self.no_kpt * self.na, 1) for x in ch)
-
- self.inplace = inplace # use in-place ops (e.g. slice assignment)
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- if self.nkpt is None or self.nkpt==0:
- x[i] = self.im[i](self.m[i](self.ia[i](x[i]))) # conv
- else:
- x[i] = torch.cat((self.im[i](self.m[i](self.ia[i](x[i]))), self.m_kpt[i](x[i])), axis=1)
-
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
- x_det = x[i][..., :6]
- x_kpt = x[i][..., 6:]
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
- kpt_grid_x = self.grid[i][..., 0:1]
- kpt_grid_y = self.grid[i][..., 1:2]
-
- if self.nkpt == 0:
- y = x[i].sigmoid()
- else:
- y = x_det.sigmoid()
-
- if self.inplace:
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2) # wh
- if self.nkpt != 0:
- x_kpt[..., 0::3] = (x_kpt[..., ::3] * 2. - 0.5 + kpt_grid_x.repeat(1,1,1,1,self.nkpt)) * self.stride[i] # kpt x
- x_kpt[..., 1::3] = (x_kpt[..., 1::3] * 2. - 0.5 + kpt_grid_y.repeat(1,1,1,1,self.nkpt)) * self.stride[i] # kpt y
- #x_kpt[..., 0::3] = (x_kpt[..., ::3] + kpt_grid_x.repeat(1,1,1,1,17)) * self.stride[i] # xy
- #x_kpt[..., 1::3] = (x_kpt[..., 1::3] + kpt_grid_y.repeat(1,1,1,1,17)) * self.stride[i] # xy
- #print('=============')
- #print(self.anchor_grid[i].shape)
- #print(self.anchor_grid[i][...,0].unsqueeze(4).shape)
- #print(x_kpt[..., 0::3].shape)
- #x_kpt[..., 0::3] = ((x_kpt[..., 0::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy
- #x_kpt[..., 1::3] = ((x_kpt[..., 1::3].tanh() * 2.) ** 3 * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy
- #x_kpt[..., 0::3] = (((x_kpt[..., 0::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,0].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_x.repeat(1,1,1,1,17) * self.stride[i] # xy
- #x_kpt[..., 1::3] = (((x_kpt[..., 1::3].sigmoid() * 4.) ** 2 - 8.) * self.anchor_grid[i][...,1].unsqueeze(4).repeat(1,1,1,1,self.nkpt)) + kpt_grid_y.repeat(1,1,1,1,17) * self.stride[i] # xy
- x_kpt[..., 2::3] = x_kpt[..., 2::3].sigmoid()
-
- y = torch.cat((xy, wh, y[..., 4:], x_kpt), dim = -1)
-
- else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- if self.nkpt != 0:
- y[..., 6:] = (y[..., 6:] * 2. - 0.5 + self.grid[i].repeat((1,1,1,1,self.nkpt))) * self.stride[i] # xy
- y = torch.cat((xy, wh, y[..., 4:]), -1)
-
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
-
-class IAuxDetect(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
- end2end = False
- include_nms = False
- concat = False
-
- def __init__(self, nc=80, anchors=(), ch=()): # detection layer
- super(IAuxDetect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[:self.nl]) # output conv
- self.m2 = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch[self.nl:]) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch[:self.nl])
- self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch[:self.nl])
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](self.ia[i](x[i])) # conv
- x[i] = self.im[i](x[i])
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- x[i+self.nl] = self.m2[i](x[i+self.nl])
- x[i+self.nl] = x[i+self.nl].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
- xy = xy * (2. * self.stride[i]) + (self.stride[i] * (self.grid[i] - 0.5)) # new xy
- wh = wh ** 2 * (4 * self.anchor_grid[i].data) # new wh
- y = torch.cat((xy, wh, conf), 4)
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x[:self.nl])
-
- def fuseforward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if not torch.onnx.is_in_onnx_export():
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else:
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].data # wh
- y = torch.cat((xy, wh, y[..., 4:]), -1)
- z.append(y.view(bs, -1, self.no))
-
- if self.training:
- out = x
- elif self.end2end:
- out = torch.cat(z, 1)
- elif self.include_nms:
- z = self.convert(z)
- out = (z, )
- elif self.concat:
- out = torch.cat(z, 1)
- else:
- out = (torch.cat(z, 1), x)
-
- return out
-
- def fuse(self):
- print("IAuxDetect.fuse")
- # fuse ImplicitA and Convolution
- for i in range(len(self.m)):
- c1,c2,_,_ = self.m[i].weight.shape
- c1_,c2_, _,_ = self.ia[i].implicit.shape
- self.m[i].bias += torch.matmul(self.m[i].weight.reshape(c1,c2),self.ia[i].implicit.reshape(c2_,c1_)).squeeze(1)
-
- # fuse ImplicitM and Convolution
- for i in range(len(self.m)):
- c1,c2, _,_ = self.im[i].implicit.shape
- self.m[i].bias *= self.im[i].implicit.reshape(c2)
- self.m[i].weight *= self.im[i].implicit.transpose(0,1)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
- def convert(self, z):
- z = torch.cat(z, 1)
- box = z[:, :, :4]
- conf = z[:, :, 4:5]
- score = z[:, :, 5:]
- score *= conf
- convert_matrix = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 1], [-0.5, 0, 0.5, 0], [0, -0.5, 0, 0.5]],
- dtype=torch.float32,
- device=z.device)
- box @= convert_matrix
- return (box, score)
-
-
-class IBin(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
-
- def __init__(self, nc=80, anchors=(), ch=(), bin_count=21): # detection layer
- super(IBin, self).__init__()
- self.nc = nc # number of classes
- self.bin_count = bin_count
-
- self.w_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0)
- self.h_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0)
- # classes, x,y,obj
- self.no = nc + 3 + \
- self.w_bin_sigmoid.get_length() + self.h_bin_sigmoid.get_length() # w-bce, h-bce
- # + self.x_bin_sigmoid.get_length() + self.y_bin_sigmoid.get_length()
-
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
-
- self.ia = nn.ModuleList(ImplicitA(x) for x in ch)
- self.im = nn.ModuleList(ImplicitM(self.no * self.na) for _ in ch)
-
- def forward(self, x):
-
- #self.x_bin_sigmoid.use_fw_regression = True
- #self.y_bin_sigmoid.use_fw_regression = True
- self.w_bin_sigmoid.use_fw_regression = True
- self.h_bin_sigmoid.use_fw_regression = True
-
- # x = x.copy() # for profiling
- z = [] # inference output
- self.training |= self.export
- for i in range(self.nl):
- x[i] = self.m[i](self.ia[i](x[i])) # conv
- x[i] = self.im[i](x[i])
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4]:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- #y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
-
-
- #px = (self.x_bin_sigmoid.forward(y[..., 0:12]) + self.grid[i][..., 0]) * self.stride[i]
- #py = (self.y_bin_sigmoid.forward(y[..., 12:24]) + self.grid[i][..., 1]) * self.stride[i]
-
- pw = self.w_bin_sigmoid.forward(y[..., 2:24]) * self.anchor_grid[i][..., 0]
- ph = self.h_bin_sigmoid.forward(y[..., 24:46]) * self.anchor_grid[i][..., 1]
-
- #y[..., 0] = px
- #y[..., 1] = py
- y[..., 2] = pw
- y[..., 3] = ph
-
- y = torch.cat((y[..., 0:4], y[..., 46:]), dim=-1)
-
- z.append(y.view(bs, -1, y.shape[-1]))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
-
-class Model(nn.Module):
- def __init__(self, cfg='yolor-csp-c.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
- super(Model, self).__init__()
- self.traced = False
- if isinstance(cfg, dict):
- self.yaml = cfg # model dict
- else: # is *.yaml
- import yaml # for torch hub
- self.yaml_file = Path(cfg).name
- with open(cfg) as f:
- self.yaml = yaml.load(f, Loader=yaml.SafeLoader) # model dict
-
- # Define model
- ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
- if nc and nc != self.yaml['nc']:
- logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
- self.yaml['nc'] = nc # override yaml value
- if anchors:
- logger.info(f'Overriding model.yaml anchors with anchors={anchors}')
- self.yaml['anchors'] = round(anchors) # override yaml value
- self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
- self.names = [str(i) for i in range(self.yaml['nc'])] # default names
- # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])
-
- # Build strides, anchors
- m = self.model[-1] # Detect()
- if isinstance(m, Detect):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
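- # from here on anchors are stored in grid units (pixel anchors divided by stride)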
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IDetect):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IAuxDetect):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))[:4]]) # forward
- #print(m.stride)
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_aux_biases() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IBin):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases_bin() # only run once
- # print('Strides: %s' % m.stride.tolist())
- if isinstance(m, IKeypoint):
- s = 256 # 2x min stride
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- check_anchor_order(m)
- m.anchors /= m.stride.view(-1, 1, 1)
- self.stride = m.stride
- self._initialize_biases_kpt() # only run once
- # print('Strides: %s' % m.stride.tolist())
-
- # Init weights, biases
- initialize_weights(self)
- self.info()
- logger.info('')
-
- def forward(self, x, augment=False, profile=False):
- if augment:
- img_size = x.shape[-2:] # height, width
- s = [1, 0.83, 0.67] # scales
- f = [None, 3, None] # flips (2-ud, 3-lr)
- y = [] # outputs
- for si, fi in zip(s, f):
- xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
- yi = self.forward_once(xi)[0] # forward
- # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
- yi[..., :4] /= si # de-scale
- if fi == 2:
- yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud
- elif fi == 3:
- yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr
- y.append(yi)
- return torch.cat(y, 1), None # augmented inference, train
- else:
- return self.forward_once(x, profile) # single-scale inference, train
-
- def forward_once(self, x, profile=False):
- y, dt = [], [] # outputs
- for m in self.model:
- if m.f != -1: # if not from previous layer
- x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
-
- if not hasattr(self, 'traced'):
- self.traced = False
-
- if self.traced:
- if isinstance(m, Detect) or isinstance(m, IDetect) or isinstance(m, IAuxDetect) or isinstance(m, IKeypoint):
- break
-
- if profile:
- c = isinstance(m, (Detect, IDetect, IAuxDetect, IBin))
- o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPS
- for _ in range(10):
- m(x.copy() if c else x)
- t = time_synchronized()
- for _ in range(10):
- m(x.copy() if c else x)
- dt.append((time_synchronized() - t) * 100)
- print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type))
-
- x = m(x) # run
-
- y.append(x if m.i in self.save else None) # save output
-
- if profile:
- print('%.1fms total' % sum(dt))
- return x
-
- def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
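- # obj bias starts at log(8 / num_cells), i.e. roughly 8 objects per 640px image;
- # cls bias starts near log(0.6 / (nc - 1)), a weak uniform class prior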
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _initialize_aux_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, mi2, s in zip(m.m, m.m2, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
- b2 = mi2.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b2.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b2.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi2.bias = torch.nn.Parameter(b2.view(-1), requires_grad=True)
-
- def _initialize_biases_bin(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Bin() module
- bc = m.bin_count
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- old = b[:, (0,1,2,bc+3)].data
- obj_idx = 2*bc+4
- b[:, :obj_idx].data += math.log(0.6 / (bc + 1 - 0.99))
- b[:, obj_idx].data += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b[:, (obj_idx+1):].data += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- b[:, (0,1,2,bc+3)].data = old
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _initialize_biases_kpt(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _print_biases(self):
- m = self.model[-1] # Detect() module
- for mi in m.m: # from
- b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
- print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
-
- # def _print_weights(self):
- # for m in self.model.modules():
- # if type(m) is Bottleneck:
- # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
-
- def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
- print('Fusing layers... ')
- for m in self.model.modules():
- if isinstance(m, RepConv):
- #print(f" fuse_repvgg_block")
- m.fuse_repvgg_block()
- elif isinstance(m, RepConv_OREPA):
- #print(f" switch_to_deploy")
- m.switch_to_deploy()
- elif type(m) is Conv and hasattr(m, 'bn'):
- m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
- delattr(m, 'bn') # remove batchnorm
- m.forward = m.fuseforward # update forward
- elif isinstance(m, (IDetect, IAuxDetect)):
- m.fuse()
- m.forward = m.fuseforward
- self.info()
- return self
-
- def nms(self, mode=True): # add or remove NMS module
- present = type(self.model[-1]) is NMS # last layer is NMS
- if mode and not present:
- print('Adding NMS... ')
- m = NMS() # module
- m.f = -1 # from
- m.i = self.model[-1].i + 1 # index
- self.model.add_module(name='%s' % m.i, module=m) # add
- self.eval()
- elif not mode and present:
- print('Removing NMS... ')
- self.model = self.model[:-1] # remove
- return self
-
- def autoshape(self): # add autoShape module
- print('Adding autoShape... ')
- m = autoShape(self) # wrap model
- copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes
- return m
-
- def info(self, verbose=False, img_size=640): # print model information
- model_info(self, verbose, img_size)
-
-
-def parse_model(d, ch): # model_dict, input_channels(3)
- logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
- anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
- na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
- no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
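- # gd (depth_multiple) scales how many times each repeatable block is stacked;
- # gw (width_multiple) scales channel counts, rounded to multiples of 8 by make_divisible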
-
- layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
- for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
- m = eval(m) if isinstance(m, str) else m # eval strings
- for j, a in enumerate(args):
- try:
- args[j] = eval(a) if isinstance(a, str) else a # eval strings
- except Exception:
- pass # leave non-evaluable strings (e.g. 'nearest') as-is
-
- n = max(round(n * gd), 1) if n > 1 else n # depth gain
- if m in [nn.Conv2d, Conv, RobustConv, RobustConv2, DWConv, GhostConv, RepConv, RepConv_OREPA, DownC,
- SPP, SPPF, SPPCSPC, GhostSPPCSPC, MixConv2d, Focus, Stem, GhostStem, CrossConv,
- Bottleneck, BottleneckCSPA, BottleneckCSPB, BottleneckCSPC,
- RepBottleneck, RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC,
- Res, ResCSPA, ResCSPB, ResCSPC,
- RepRes, RepResCSPA, RepResCSPB, RepResCSPC,
- ResX, ResXCSPA, ResXCSPB, ResXCSPC,
- RepResX, RepResXCSPA, RepResXCSPB, RepResXCSPC,
- Ghost, GhostCSPA, GhostCSPB, GhostCSPC,
- SwinTransformerBlock, STCSPA, STCSPB, STCSPC,
- SwinTransformer2Block, ST2CSPA, ST2CSPB, ST2CSPC]:
- c1, c2 = ch[f], args[0]
- if c2 != no: # if not output
- c2 = make_divisible(c2 * gw, 8)
-
- args = [c1, c2, *args[1:]]
- if m in [DownC, SPPCSPC, GhostSPPCSPC,
- BottleneckCSPA, BottleneckCSPB, BottleneckCSPC,
- RepBottleneckCSPA, RepBottleneckCSPB, RepBottleneckCSPC,
- ResCSPA, ResCSPB, ResCSPC,
- RepResCSPA, RepResCSPB, RepResCSPC,
- ResXCSPA, ResXCSPB, ResXCSPC,
- RepResXCSPA, RepResXCSPB, RepResXCSPC,
- GhostCSPA, GhostCSPB, GhostCSPC,
- STCSPA, STCSPB, STCSPC,
- ST2CSPA, ST2CSPB, ST2CSPC]:
- args.insert(2, n) # number of repeats
- n = 1
- elif m is nn.BatchNorm2d:
- args = [ch[f]]
- elif m is Concat:
- c2 = sum([ch[x] for x in f])
- elif m is Chuncat:
- c2 = sum([ch[x] for x in f])
- elif m is Shortcut:
- c2 = ch[f[0]]
- elif m is Foldcut:
- c2 = ch[f] // 2
- elif m in [Detect, IDetect, IAuxDetect, IBin, IKeypoint]:
- args.append([ch[x] for x in f])
- if isinstance(args[1], int): # number of anchors
- args[1] = [list(range(args[1] * 2))] * len(f)
- elif m is ReOrg:
- c2 = ch[f] * 4
- elif m is Contract:
- c2 = ch[f] * args[0] ** 2
- elif m is Expand:
- c2 = ch[f] // args[0] ** 2
- else:
- c2 = ch[f]
-
- m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module
- t = str(m)[8:-2].replace('__main__.', '') # module type
- np = sum([x.numel() for x in m_.parameters()]) # number params
- m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
- logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print
- save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
- layers.append(m_)
- if i == 0:
- ch = []
- ch.append(c2)
- return nn.Sequential(*layers), sorted(save)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--cfg', type=str, default='yolor-csp-c.yaml', help='model.yaml')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--profile', action='store_true', help='profile model speed')
- opt = parser.parse_args()
- opt.cfg = check_file(opt.cfg) # check file
- set_logging()
- device = select_device(opt.device)
-
- # Create model
- model = Model(opt.cfg).to(device)
- model.train()
-
- if opt.profile:
- img = torch.rand(1, 3, 640, 640).to(device)
- y = model(img, profile=True)
-
- # Profile
- # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 640, 640).to(device)
- # y = model(img, profile=True)
-
- # Tensorboard
- # from torch.utils.tensorboard import SummaryWriter
- # tb_writer = SummaryWriter()
- # print("Run 'tensorboard --logdir=models/runs' to view tensorboard at http://localhost:6006/")
- # tb_writer.add_graph(model.model, img) # add model to tensorboard
- # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard
diff --git a/cv/detection/yolov7/pytorch/requirements.txt b/cv/detection/yolov7/pytorch/requirements.txt
deleted file mode 100644
index f4d218218aaa36299f23e8664cba5a25c8f0a184..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/requirements.txt
+++ /dev/null
@@ -1,39 +0,0 @@
-# Usage: pip install -r requirements.txt
-
-# Base ----------------------------------------
-matplotlib>=3.2.2
-numpy>=1.18.5,<1.24.0
-opencv-python>=4.1.1
-Pillow>=7.1.2
-PyYAML>=5.3.1
-requests>=2.23.0
-scipy>=1.4.1
-torch>=1.7.0,!=1.12.0
-torchvision>=0.8.1,!=0.13.0
-tqdm>=4.41.0
-protobuf<4.21.3
-
-# Logging -------------------------------------
-tensorboard>=2.4.1
-# wandb
-
-# Plotting ------------------------------------
-pandas>=1.1.4
-seaborn>=0.11.0
-
-# Export --------------------------------------
-# coremltools>=4.1 # CoreML export
-# onnx>=1.9.0 # ONNX export
-# onnx-simplifier>=0.3.6 # ONNX simplifier
-# scikit-learn==0.19.2 # CoreML quantization
-# tensorflow>=2.4.1 # TFLite export
-# tensorflowjs>=3.9.0 # TF.js export
-# openvino-dev # OpenVINO export
-
-# Extras --------------------------------------
-ipython # interactive notebook
-psutil # system utilization
-thop # FLOPs computation
-# albumentations>=1.0.3
-# pycocotools>=2.0 # COCO mAP
-# roboflow
diff --git a/cv/detection/yolov7/pytorch/test.py b/cv/detection/yolov7/pytorch/test.py
deleted file mode 100644
index 17b48060bebca76ba19b5f456da16fcff9324824..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/test.py
+++ /dev/null
@@ -1,353 +0,0 @@
-import argparse
-import json
-import os
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-import torch
-import yaml
-from tqdm import tqdm
-
-from models.experimental import attempt_load
-from utils.datasets import create_dataloader
-from utils.general import coco80_to_coco91_class, check_dataset, check_file, check_img_size, check_requirements, \
- box_iou, non_max_suppression, scale_coords, xyxy2xywh, xywh2xyxy, set_logging, increment_path, colorstr
-from utils.metrics import ap_per_class, ConfusionMatrix
-from utils.plots import plot_images, output_to_target, plot_study_txt
-from utils.torch_utils import select_device, time_synchronized, TracedModel
-
-
-def test(data,
- weights=None,
- batch_size=32,
- imgsz=640,
- conf_thres=0.001,
- iou_thres=0.6, # for NMS
- save_json=False,
- single_cls=False,
- augment=False,
- verbose=False,
- model=None,
- dataloader=None,
- save_dir=Path(''), # for saving images
- save_txt=False, # for auto-labelling
- save_hybrid=False, # for hybrid auto-labelling
- save_conf=False, # save auto-label confidences
- plots=True,
- wandb_logger=None,
- compute_loss=None,
- half_precision=True,
- trace=False,
- is_coco=False,
- v5_metric=False):
- # Initialize/load model and set device
- training = model is not None
- if training: # called by train.py
- device = next(model.parameters()).device # get model device
-
- else: # called directly
- set_logging()
- device = select_device(opt.device, batch_size=batch_size)
-
- # Directories
- save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run
- (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
-
- # Load model
- model = attempt_load(weights, map_location=device) # load FP32 model
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- imgsz = check_img_size(imgsz, s=gs) # check img_size
-
- if trace:
- model = TracedModel(model, device, imgsz)
-
- # Half
- half = device.type != 'cpu' and half_precision # half precision only supported on CUDA
- if half:
- model.half()
-
- # Configure
- model.eval()
- if isinstance(data, str):
- is_coco = data.endswith('coco.yaml')
- with open(data) as f:
- data = yaml.load(f, Loader=yaml.SafeLoader)
- check_dataset(data) # check
- nc = 1 if single_cls else int(data['nc']) # number of classes
- iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95
- niou = iouv.numel()
-
- # Logging
- log_imgs = 0
- if wandb_logger and wandb_logger.wandb:
- log_imgs = min(wandb_logger.log_imgs, 100)
- # Dataloader
- if not training:
- if device.type != 'cpu':
- model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
- task = opt.task if opt.task in ('train', 'val', 'test') else 'val' # path to train/val/test images
- dataloader = create_dataloader(data[task], imgsz, batch_size, gs, opt, pad=0.5, rect=True,
- prefix=colorstr(f'{task}: '))[0]
-
- if v5_metric:
- print("Testing with YOLOv5 AP metric...")
-
- seen = 0
- confusion_matrix = ConfusionMatrix(nc=nc)
- names = {k: v for k, v in enumerate(model.names if hasattr(model, 'names') else model.module.names)}
- coco91class = coco80_to_coco91_class()
- s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Labels', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')
- p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0.
- loss = torch.zeros(3, device=device)
- jdict, stats, ap, ap_class, wandb_images = [], [], [], [], []
- for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)):
- img = img.to(device, non_blocking=True)
- img = img.half() if half else img.float() # uint8 to fp16/32
- img /= 255.0 # 0 - 255 to 0.0 - 1.0
- targets = targets.to(device)
- nb, _, height, width = img.shape # batch size, channels, height, width
-
- with torch.no_grad():
- # Run model
- t = time_synchronized()
- out, train_out = model(img, augment=augment) # inference and training outputs
- t0 += time_synchronized() - t
-
- # Compute loss
- if compute_loss:
- loss += compute_loss([x.float() for x in train_out], targets)[1][:3] # box, obj, cls
-
- # Run NMS
- targets[:, 2:] *= torch.Tensor([width, height, width, height]).to(device) # to pixels
- lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling
- t = time_synchronized()
- out = non_max_suppression(out, conf_thres=conf_thres, iou_thres=iou_thres, labels=lb, multi_label=True)
- t1 += time_synchronized() - t
-
- # Statistics per image
- for si, pred in enumerate(out):
- labels = targets[targets[:, 0] == si, 1:]
- nl = len(labels)
- tcls = labels[:, 0].tolist() if nl else [] # target class
- path = Path(paths[si])
- seen += 1
-
- if len(pred) == 0:
- if nl:
- stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
- continue
-
- # Predictions
- predn = pred.clone()
- scale_coords(img[si].shape[1:], predn[:, :4], shapes[si][0], shapes[si][1]) # native-space pred
-
- # Append to text file
- if save_txt:
- gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]] # normalization gain whwh
- for *xyxy, conf, cls in predn.tolist():
- xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
- with open(save_dir / 'labels' / (path.stem + '.txt'), 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- # W&B logging - Media Panel Plots
- if len(wandb_images) < log_imgs and wandb_logger.current_epoch > 0: # Check for test operation
- if wandb_logger.current_epoch % wandb_logger.bbox_interval == 0:
- box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
- "class_id": int(cls),
- "box_caption": "%s %.3f" % (names[cls], conf),
- "scores": {"class_score": conf},
- "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()]
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- wandb_images.append(wandb_logger.wandb.Image(img[si], boxes=boxes, caption=path.name))
- wandb_logger.log_training_progress(predn, path, names) if wandb_logger and wandb_logger.wandb_run else None
-
- # Append to pycocotools JSON dictionary
- if save_json:
- # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ...
- image_id = int(path.stem) if path.stem.isnumeric() else path.stem
- box = xyxy2xywh(predn[:, :4]) # xywh
- box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner
- for p, b in zip(pred.tolist(), box.tolist()):
- jdict.append({'image_id': image_id,
- 'category_id': coco91class[int(p[5])] if is_coco else int(p[5]),
- 'bbox': [round(x, 3) for x in b],
- 'score': round(p[4], 5)})
-
- # Assign all predictions as incorrect
- correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device)
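- # correct[d, t] is True when detection d matches a target at IoU threshold iouv[t];
- # these per-threshold flags later yield AP@0.5:0.95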
- if nl:
- detected = [] # target indices
- tcls_tensor = labels[:, 0]
-
- # target boxes
- tbox = xywh2xyxy(labels[:, 1:5])
- scale_coords(img[si].shape[1:], tbox, shapes[si][0], shapes[si][1]) # native-space labels
- if plots:
- confusion_matrix.process_batch(predn, torch.cat((labels[:, 0:1], tbox), 1))
-
- # Per target class
- for cls in torch.unique(tcls_tensor):
-                    ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1)  # target indices
-                    pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1)  # prediction indices
-
- # Search for detections
- if pi.shape[0]:
- # Prediction to target ious
- ious, i = box_iou(predn[pi, :4], tbox[ti]).max(1) # best ious, indices
-
- # Append detections
- detected_set = set()
- for j in (ious > iouv[0]).nonzero(as_tuple=False):
- d = ti[i[j]] # detected target
- if d.item() not in detected_set:
- detected_set.add(d.item())
- detected.append(d)
- correct[pi[j]] = ious[j] > iouv # iou_thres is 1xn
- if len(detected) == nl: # all targets already located in image
- break
-
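-            # `correct` is an (n_pred, niou) boolean matrix marking prediction i as a true
-            # positive at IoU threshold iouv[j]; ap_per_class() averages it into AP@0.5:0.95.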
- # Append statistics (correct, conf, pcls, tcls)
- stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))
-
- # Plot images
- if plots and batch_i < 3:
- f = save_dir / f'test_batch{batch_i}_labels.jpg' # labels
- Thread(target=plot_images, args=(img, targets, paths, f, names), daemon=True).start()
- f = save_dir / f'test_batch{batch_i}_pred.jpg' # predictions
- Thread(target=plot_images, args=(img, output_to_target(out), paths, f, names), daemon=True).start()
-
- # Compute statistics
- stats = [np.concatenate(x, 0) for x in zip(*stats)] # to numpy
- if len(stats) and stats[0].any():
- p, r, ap, f1, ap_class = ap_per_class(*stats, plot=plots, v5_metric=v5_metric, save_dir=save_dir, names=names)
- ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95
- mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
- nt = np.bincount(stats[3].astype(np.int64), minlength=nc) # number of targets per class
- else:
- nt = torch.zeros(1)
-
- # Print results
- pf = '%20s' + '%12i' * 2 + '%12.3g' * 4 # print format
- print(pf % ('all', seen, nt.sum(), mp, mr, map50, map))
-
- # Print results per class
- if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats):
- for i, c in enumerate(ap_class):
- print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))
-
- # Print speeds
-    t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size)  # ms/image (inference, NMS, total) + image size and batch size
- if not training:
- print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t)
-
- # Plots
- if plots:
- confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))
- if wandb_logger and wandb_logger.wandb:
- val_batches = [wandb_logger.wandb.Image(str(f), caption=f.name) for f in sorted(save_dir.glob('test*.jpg'))]
- wandb_logger.log({"Validation": val_batches})
- if wandb_images:
- wandb_logger.log({"Bounding Box Debugger/Images": wandb_images})
-
- # Save JSON
- if save_json and len(jdict):
- w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights
- anno_json = './coco/annotations/instances_val2017.json' # annotations json
- pred_json = str(save_dir / f"{w}_predictions.json") # predictions json
- print('\nEvaluating pycocotools mAP... saving %s...' % pred_json)
- with open(pred_json, 'w') as f:
- json.dump(jdict, f)
-
- try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
- from pycocotools.coco import COCO
- from pycocotools.cocoeval import COCOeval
-
- anno = COCO(anno_json) # init annotations api
- pred = anno.loadRes(pred_json) # init predictions api
- eval = COCOeval(anno, pred, 'bbox')
- if is_coco:
- eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files] # image IDs to evaluate
- eval.evaluate()
- eval.accumulate()
- eval.summarize()
- map, map50 = eval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5)
- except Exception as e:
- print(f'pycocotools unable to run: {e}')
-
- # Return results
- model.float() # for training
- if not training:
- s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
- print(f"Results saved to {save_dir}{s}")
- maps = np.zeros(nc) + map
- for i, c in enumerate(ap_class):
- maps[c] = ap[i]
- return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(prog='test.py')
- parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)')
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='*.data path')
- parser.add_argument('--batch-size', type=int, default=32, help='size of each image batch')
- parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
- parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS')
- parser.add_argument('--task', default='val', help='train, val, test, speed or study')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
- parser.add_argument('--augment', action='store_true', help='augmented inference')
- parser.add_argument('--verbose', action='store_true', help='report mAP by class')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
- parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt')
- parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
- parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file')
- parser.add_argument('--project', default='runs/test', help='save to project/name')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
-    parser.add_argument('--no-trace', action='store_true', help="don't trace model")
- parser.add_argument('--v5-metric', action='store_true', help='assume maximum recall as 1.0 in AP calculation')
- opt = parser.parse_args()
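-    # Force COCO-format JSON output whenever the dataset config is coco.yaml.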
- opt.save_json |= opt.data.endswith('coco.yaml')
- opt.data = check_file(opt.data) # check file
- print(opt)
- #check_requirements()
-
- if opt.task in ('train', 'val', 'test'): # run normally
- test(opt.data,
- opt.weights,
- opt.batch_size,
- opt.img_size,
- opt.conf_thres,
- opt.iou_thres,
- opt.save_json,
- opt.single_cls,
- opt.augment,
- opt.verbose,
- save_txt=opt.save_txt | opt.save_hybrid,
- save_hybrid=opt.save_hybrid,
- save_conf=opt.save_conf,
- trace=not opt.no_trace,
- v5_metric=opt.v5_metric
- )
-
- elif opt.task == 'speed': # speed benchmarks
- for w in opt.weights:
- test(opt.data, w, opt.batch_size, opt.img_size, 0.25, 0.45, save_json=False, plots=False, v5_metric=opt.v5_metric)
-
- elif opt.task == 'study': # run over a range of settings and save/plot
- # python test.py --task study --data coco.yaml --iou 0.65 --weights yolov7.pt
- x = list(range(256, 1536 + 128, 128)) # x axis (image sizes)
- for w in opt.weights:
- f = f'study_{Path(opt.data).stem}_{Path(w).stem}.txt' # filename to save to
- y = [] # y axis
- for i in x: # img-size
- print(f'\nRunning {f} point {i}...')
- r, _, t = test(opt.data, w, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json,
- plots=False, v5_metric=opt.v5_metric)
- y.append(r + t) # results and times
- np.savetxt(f, y, fmt='%10.4g') # save
- os.system('zip -r study.zip study_*.txt')
- plot_study_txt(x=x) # plot
diff --git a/cv/detection/yolov7/pytorch/train.py b/cv/detection/yolov7/pytorch/train.py
deleted file mode 100644
index f1dfd4d9189330dce83479cb2996e3c7f4da9117..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/train.py
+++ /dev/null
@@ -1,708 +0,0 @@
-# Copyright (c) 2024, Shanghai Iluvatar CoreX Semiconductor Co., Ltd.
-# All Rights Reserved.
-
-import argparse
-import logging
-import math
-import os
-import random
-import time
-from copy import deepcopy
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-import torch.optim.lr_scheduler as lr_scheduler
-import torch.utils.data
-import yaml
-from torch.cuda import amp
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.tensorboard import SummaryWriter
-from tqdm import tqdm
-
-import test # import test.py to get mAP after each epoch
-from models.experimental import attempt_load
-from models.yolo import Model
-from utils.autoanchor import check_anchors
-from utils.datasets import create_dataloader
-from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
- fitness, strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
- check_requirements, print_mutation, set_logging, one_cycle, colorstr
-from utils.google_utils import attempt_download
-from utils.loss import ComputeLoss, ComputeLossOTA
-from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
-from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, is_parallel
-from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume
-
-logger = logging.getLogger(__name__)
-
-
-def train(hyp, opt, device, tb_writer=None):
- logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
- save_dir, epochs, batch_size, total_batch_size, weights, rank, freeze = \
- Path(opt.save_dir), opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank, opt.freeze
-
- # Directories
- wdir = save_dir / 'weights'
- wdir.mkdir(parents=True, exist_ok=True) # make dir
- last = wdir / 'last.pt'
- best = wdir / 'best.pt'
- results_file = save_dir / 'results.txt'
-
- # Save run settings
- with open(save_dir / 'hyp.yaml', 'w') as f:
- yaml.dump(hyp, f, sort_keys=False)
- with open(save_dir / 'opt.yaml', 'w') as f:
- yaml.dump(vars(opt), f, sort_keys=False)
-
- # Configure
- plots = not opt.evolve # create plots
- cuda = device.type != 'cpu'
- init_seeds(2 + rank)
- with open(opt.data) as f:
- data_dict = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- is_coco = opt.data.endswith('coco.yaml')
-
- # Logging- Doing this before checking the dataset. Might update data_dict
- loggers = {'wandb': None} # loggers dict
- if rank in [-1, 0]:
- opt.hyp = hyp # add hyperparameters
- run_id = torch.load(weights, map_location=device).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
- wandb_logger = WandbLogger(opt, Path(opt.save_dir).stem, run_id, data_dict)
- loggers['wandb'] = wandb_logger.wandb
- data_dict = wandb_logger.data_dict
- if wandb_logger.wandb:
- weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp # WandbLogger might update weights, epochs if resuming
-
- nc = 1 if opt.single_cls else int(data_dict['nc']) # number of classes
- names = ['item'] if opt.single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names
- assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check
-
- # Model
- pretrained = weights.endswith('.pt')
- if pretrained:
- with torch_distributed_zero_first(rank):
- attempt_download(weights) # download if not found locally
- ckpt = torch.load(weights, map_location=device) # load checkpoint
- model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- exclude = ['anchor'] if (opt.cfg or hyp.get('anchors')) and not opt.resume else [] # exclude keys
- state_dict = ckpt['model'].float().state_dict() # to FP32
- state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect
- model.load_state_dict(state_dict, strict=False) # load
- logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report
- else:
- model = Model(opt.cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- with torch_distributed_zero_first(rank):
- check_dataset(data_dict) # check
- train_path = data_dict['train']
- test_path = data_dict['val']
-
- # Freeze
- freeze = [f'model.{x}.' for x in (freeze if len(freeze) > 1 else range(freeze[0]))] # parameter names to freeze (full or partial)
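-    # freeze=[0] freezes nothing (range(0) is empty); freeze=[n] freezes blocks 0..n-1;
-    # an explicit index list (e.g. --freeze 0 1 2) freezes exactly those blocks.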
- for k, v in model.named_parameters():
- v.requires_grad = True # train all layers
- if any(x in k for x in freeze):
- print('freezing %s' % k)
- v.requires_grad = False
-
- # Optimizer
- nbs = 64 # nominal batch size
-    accumulate = max(round(nbs / total_batch_size), 1)  # accumulate gradients over this many batches before an optimizer step
- hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay
- logger.info(f"Scaled weight_decay = {hyp['weight_decay']}")
-
- pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
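-    # pg0: no-decay parameters (BN weights plus the implicit/attention/re-param tensors below);
-    # pg1: conv/linear weights with weight decay; pg2: biases (group index 2, warmup bias LR).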
- for k, v in model.named_modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- pg2.append(v.bias) # biases
- if isinstance(v, nn.BatchNorm2d):
- pg0.append(v.weight) # no decay
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- pg1.append(v.weight) # apply decay
- if hasattr(v, 'im'):
- if hasattr(v.im, 'implicit'):
- pg0.append(v.im.implicit)
- else:
- for iv in v.im:
- pg0.append(iv.implicit)
- if hasattr(v, 'imc'):
- if hasattr(v.imc, 'implicit'):
- pg0.append(v.imc.implicit)
- else:
- for iv in v.imc:
- pg0.append(iv.implicit)
- if hasattr(v, 'imb'):
- if hasattr(v.imb, 'implicit'):
- pg0.append(v.imb.implicit)
- else:
- for iv in v.imb:
- pg0.append(iv.implicit)
- if hasattr(v, 'imo'):
- if hasattr(v.imo, 'implicit'):
- pg0.append(v.imo.implicit)
- else:
- for iv in v.imo:
- pg0.append(iv.implicit)
- if hasattr(v, 'ia'):
- if hasattr(v.ia, 'implicit'):
- pg0.append(v.ia.implicit)
- else:
- for iv in v.ia:
- pg0.append(iv.implicit)
- if hasattr(v, 'attn'):
- if hasattr(v.attn, 'logit_scale'):
- pg0.append(v.attn.logit_scale)
- if hasattr(v.attn, 'q_bias'):
- pg0.append(v.attn.q_bias)
- if hasattr(v.attn, 'v_bias'):
- pg0.append(v.attn.v_bias)
- if hasattr(v.attn, 'relative_position_bias_table'):
- pg0.append(v.attn.relative_position_bias_table)
- if hasattr(v, 'rbr_dense'):
- if hasattr(v.rbr_dense, 'weight_rbr_origin'):
- pg0.append(v.rbr_dense.weight_rbr_origin)
- if hasattr(v.rbr_dense, 'weight_rbr_avg_conv'):
- pg0.append(v.rbr_dense.weight_rbr_avg_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_pfir_conv'):
- pg0.append(v.rbr_dense.weight_rbr_pfir_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_idconv1'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_idconv1)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_conv2'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_conv2)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_dw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_dw)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_pw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_pw)
- if hasattr(v.rbr_dense, 'vector'):
- pg0.append(v.rbr_dense.vector)
-
- if opt.adam:
- optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
- else:
- optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
-
- optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay
- optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
- logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
- del pg0, pg1, pg2
-
- # Scheduler https://arxiv.org/pdf/1812.01187.pdf
- # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
- if opt.linear_lr:
- lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear
- else:
- lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf']
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
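-    # lf maps the epoch index to an LR multiplier decaying from 1.0 to hyp['lrf'].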
- # plot_lr_scheduler(optimizer, scheduler, epochs)
-
- # EMA
- ema = ModelEMA(model) if rank in [-1, 0] else None
-
- # Resume
- start_epoch, best_fitness = 0, 0.0
- if pretrained:
- # Optimizer
- if ckpt['optimizer'] is not None:
- optimizer.load_state_dict(ckpt['optimizer'])
- best_fitness = ckpt['best_fitness']
-
- # EMA
- if ema and ckpt.get('ema'):
- ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
- ema.updates = ckpt['updates']
-
- # Results
- if ckpt.get('training_results') is not None:
- results_file.write_text(ckpt['training_results']) # write results.txt
-
- # Epochs
- start_epoch = ckpt['epoch'] + 1
- if opt.resume:
- assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
- if epochs < start_epoch:
- logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
- (weights, ckpt['epoch'], epochs))
- epochs += ckpt['epoch'] # finetune additional epochs
-
- del ckpt, state_dict
-
- # Image sizes
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj'])
- imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples
-
- # DP mode
- if cuda and rank == -1 and torch.cuda.device_count() > 1:
- model = torch.nn.DataParallel(model)
-
- # SyncBatchNorm
- if opt.sync_bn and cuda and rank != -1:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
- logger.info('Using SyncBatchNorm()')
-
- # Trainloader
- dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
- hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
- world_size=opt.world_size, workers=opt.workers,
- image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
- mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class
- nb = len(dataloader) # number of batches
- assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
-
- # Process 0
- if rank in [-1, 0]:
- testloader = create_dataloader(test_path, imgsz_test, batch_size * 2, gs, opt, # testloader
- hyp=hyp, cache=opt.cache_images and not opt.notest, rect=True, rank=-1,
- world_size=opt.world_size, workers=opt.workers,
- pad=0.5, prefix=colorstr('val: '))[0]
-
- if not opt.resume:
- labels = np.concatenate(dataset.labels, 0)
- c = torch.tensor(labels[:, 0]) # classes
- # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency
- # model._initialize_biases(cf.to(device))
- if plots:
- #plot_labels(labels, names, save_dir, loggers)
- if tb_writer:
- tb_writer.add_histogram('classes', c, 0)
-
- # Anchors
- if not opt.noautoanchor:
- check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
- model.half().float() # pre-reduce anchor precision
-
- # DDP mode
- if cuda and rank != -1:
- model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank,
- # nn.MultiheadAttention incompatibility with DDP https://github.com/pytorch/pytorch/issues/26698
- find_unused_parameters=any(isinstance(layer, nn.MultiheadAttention) for layer in model.modules()))
-
- # Model parameters
- hyp['box'] *= 3. / nl # scale to layers
- hyp['cls'] *= nc / 80. * 3. / nl # scale to classes and layers
- hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl # scale to image size and layers
- hyp['label_smoothing'] = opt.label_smoothing
- model.nc = nc # attach number of classes to model
- model.hyp = hyp # attach hyperparameters to model
- model.gr = 1.0 # iou loss ratio (obj_loss = 1.0 or iou)
- model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights
- model.names = names
-
- # Start training
- t0 = time.time()
- nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations)
- # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
- maps = np.zeros(nc) # mAP per class
- results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
- scheduler.last_epoch = start_epoch - 1 # do not move
- scaler = amp.GradScaler(enabled=cuda)
- compute_loss_ota = ComputeLossOTA(model) # init loss class
- compute_loss = ComputeLoss(model) # init loss class
- logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n'
- f'Using {dataloader.num_workers} dataloader workers\n'
- f'Logging results to {save_dir}\n'
- f'Starting training for {epochs} epochs...')
- torch.save(model, wdir / 'init.pt')
- for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
- model.train()
-
- # Update image weights (optional)
- if opt.image_weights:
- # Generate indices
- if rank in [-1, 0]:
- cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights
- iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
- dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
- # Broadcast if DDP
- if rank != -1:
- indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int()
- dist.broadcast(indices, 0)
- if rank != 0:
- dataset.indices = indices.cpu().numpy()
-
- # Update mosaic border
- # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
- # dataset.mosaic_border = [b - imgsz, -b] # height, width borders
-
- mloss = torch.zeros(4, device=device) # mean losses
- if rank != -1:
- dataloader.sampler.set_epoch(epoch)
- pbar = enumerate(dataloader)
- logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'labels', 'img_size'))
- if rank in [-1, 0]:
- pbar = tqdm(pbar, total=nb) # progress bar
- optimizer.zero_grad()
- for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
- ni = i + nb * epoch # number integrated batches (since train start)
- imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0
-
- # Warmup
- if ni <= nw:
- xi = [0, nw] # x interp
- # model.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
- accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
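-                # Gradient accumulation ramps from 1 up to nbs / total_batch_size over warmup.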
- for j, x in enumerate(optimizer.param_groups):
- # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
- x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
- if 'momentum' in x:
- x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
-
- # Multi-scale
- if opt.multi_scale:
-                sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs  # size (randrange requires ints)
- sf = sz / max(imgs.shape[2:]) # scale factor
- if sf != 1:
- ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
- imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
-
- # Forward
- with amp.autocast(enabled=cuda):
- pred = model(imgs) # forward
- if 'loss_ota' not in hyp or hyp['loss_ota'] == 1:
- loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size
- else:
- loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size
- if rank != -1:
- loss *= opt.world_size # gradient averaged between devices in DDP mode
- if opt.quad:
- loss *= 4.
-
- # Backward
- scaler.scale(loss).backward()
-
- # Optimize
- if ni % accumulate == 0:
- scaler.step(optimizer) # optimizer.step
- scaler.update()
- optimizer.zero_grad()
- if ema:
- ema.update(model)
-
- # Print
- if rank in [-1, 0]:
- mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
- mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
- s = ('%10s' * 2 + '%10.4g' * 6) % (
- '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
- pbar.set_description(s)
-
- # Plot
- if plots and ni < 10:
- f = save_dir / f'train_batch{ni}.jpg' # filename
- Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
- # if tb_writer:
- # tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
- # tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), []) # add model graph
- elif plots and ni == 10 and wandb_logger.wandb:
- wandb_logger.log({"Mosaics": [wandb_logger.wandb.Image(str(x), caption=x.name) for x in
- save_dir.glob('train*.jpg') if x.exists()]})
-
- # end batch ------------------------------------------------------------------------------------------------
- # end epoch ----------------------------------------------------------------------------------------------------
-
- # Scheduler
- lr = [x['lr'] for x in optimizer.param_groups] # for tensorboard
- scheduler.step()
-
- # DDP process 0 or single-GPU
- if rank in [-1, 0]:
- # mAP
- ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
- final_epoch = epoch + 1 == epochs
- if not opt.notest or final_epoch: # Calculate mAP
- wandb_logger.current_epoch = epoch + 1
- results, maps, times = test.test(data_dict,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- model=ema.ema,
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- verbose=nc < 50 and final_epoch,
- plots=plots and final_epoch,
- wandb_logger=wandb_logger,
- compute_loss=compute_loss,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Write
- with open(results_file, 'a') as f:
- f.write(s + '%10.4g' * 7 % results + '\n') # append metrics, val_loss
- if len(opt.name) and opt.bucket:
- os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))
-
- # Log
- tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss
- 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
- 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss
- 'x/lr0', 'x/lr1', 'x/lr2'] # params
- for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
- if tb_writer:
- tb_writer.add_scalar(tag, x, epoch) # tensorboard
- if wandb_logger.wandb:
- wandb_logger.log({tag: x}) # W&B
-
- # Update best mAP
- fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
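-            # fitness() weights [P, R, mAP@.5, mAP@.5:.95] as [0.0, 0.0, 0.1, 0.9] by default.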
- if fi > best_fitness:
- best_fitness = fi
- wandb_logger.end_epoch(best_result=best_fitness == fi)
-
- # Save model
- if (not opt.nosave) or (final_epoch and not opt.evolve): # if save
- ckpt = {'epoch': epoch,
- 'best_fitness': best_fitness,
- 'training_results': results_file.read_text(),
- 'model': deepcopy(model.module if is_parallel(model) else model).half(),
- 'ema': deepcopy(ema.ema).half(),
- 'updates': ema.updates,
- 'optimizer': optimizer.state_dict(),
- 'wandb_id': wandb_logger.wandb_run.id if wandb_logger.wandb else None}
-
- # Save last, best and delete
- torch.save(ckpt, last)
- if best_fitness == fi:
- torch.save(ckpt, best)
- if (best_fitness == fi) and (epoch >= 200):
- torch.save(ckpt, wdir / 'best_{:03d}.pt'.format(epoch))
- if epoch == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif ((epoch+1) % 25) == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif epoch >= (epochs-5):
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- if wandb_logger.wandb:
- if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1:
- wandb_logger.log_model(
- last.parent, opt, epoch, fi, best_model=best_fitness == fi)
- del ckpt
-
- # end epoch ----------------------------------------------------------------------------------------------------
- # end training
- if rank in [-1, 0]:
- # Plots
- if plots:
- plot_results(save_dir=save_dir) # save as results.png
- if wandb_logger.wandb:
- files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]]
- wandb_logger.log({"Results": [wandb_logger.wandb.Image(str(save_dir / f), caption=f) for f in files
- if (save_dir / f).exists()]})
- # Test best.pt
- logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
- if opt.data.endswith('coco.yaml') and nc == 80: # if COCO
-            for m in (last, best) if best.exists() else (last,):  # speed, mAP tests
- results, _, _ = test.test(opt.data,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- conf_thres=0.001,
- iou_thres=0.7,
- model=attempt_load(m, device).half(),
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- save_json=True,
- plots=False,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Strip optimizers
- final = best if best.exists() else last # final model
- for f in last, best:
- if f.exists():
- strip_optimizer(f) # strip optimizers
- if opt.bucket:
- os.system(f'gsutil cp {final} gs://{opt.bucket}/weights') # upload
- if wandb_logger.wandb and not opt.evolve: # Log the stripped model
- wandb_logger.wandb.log_artifact(str(final), type='model',
- name='run_' + wandb_logger.wandb_run.id + '_model',
- aliases=['last', 'best', 'stripped'])
- wandb_logger.finish_run()
- else:
- dist.destroy_process_group()
- torch.cuda.empty_cache()
- return results
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
-    parser.add_argument('--weights', type=str, default='yolov7.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
- parser.add_argument('--hyp', type=str, default='data/hyp.scratch.p5.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=300)
- parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--notest', action='store_true', help='only test final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
- parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
- parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--local_rank', '--local-rank', type=int, default=-1, help='DDP parameter, do not modify')
- parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
- parser.add_argument('--project', default='runs/train', help='save to project/name')
- parser.add_argument('--entity', default=None, help='W&B entity')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--quad', action='store_true', help='quad dataloader')
- parser.add_argument('--linear-lr', action='store_true', help='linear LR')
- parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
- parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
- parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
- parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
- parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
- parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone of yolov7=50, first3=0 1 2')
- parser.add_argument('--v5-metric', action='store_true', help='assume maximum recall as 1.0 in AP calculation')
- opt = parser.parse_args()
-
- # Set DDP variables
- opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1
- opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1
- set_logging(opt.global_rank)
- #if opt.global_rank in [-1, 0]:
- # check_git_status()
- # check_requirements()
-
- # Resume
- wandb_run = check_wandb_resume(opt)
- if opt.resume and not wandb_run: # resume an interrupted run
- ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path
- assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
- apriori = opt.global_rank, opt.local_rank
- with open(Path(ckpt).parent.parent / 'opt.yaml') as f:
- opt = argparse.Namespace(**yaml.load(f, Loader=yaml.SafeLoader)) # replace
- opt.cfg, opt.weights, opt.resume, opt.batch_size, opt.global_rank, opt.local_rank = '', ckpt, True, opt.total_batch_size, *apriori # reinstate
- logger.info('Resuming training from %s' % ckpt)
- else:
- # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
- opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
- assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
- opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test)
- opt.name = 'evolve' if opt.evolve else opt.name
- opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok | opt.evolve) # increment run
-
- # DDP mode
- opt.total_batch_size = opt.batch_size
- device = select_device(opt.device, batch_size=opt.batch_size)
- if opt.local_rank != -1:
- assert torch.cuda.device_count() > opt.local_rank
- torch.cuda.set_device(opt.local_rank)
- device = torch.device('cuda', opt.local_rank)
- dist.init_process_group(backend='nccl', init_method='env://') # distributed backend
- assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
- opt.batch_size = opt.total_batch_size // opt.world_size
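-        # Each DDP process now trains on its per-GPU share of the total batch.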
-
- # Hyperparameters
- with open(opt.hyp) as f:
- hyp = yaml.load(f, Loader=yaml.SafeLoader) # load hyps
-
- # Train
- logger.info(opt)
- if not opt.evolve:
- tb_writer = None # init loggers
- if opt.global_rank in [-1, 0]:
- prefix = colorstr('tensorboard: ')
- logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
- tb_writer = SummaryWriter(opt.save_dir) # Tensorboard
- train(hyp, opt, device, tb_writer)
-
- # Evolve hyperparameters (optional)
- else:
- # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
- meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
- 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
- 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
- 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
- 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
- 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
- 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
- 'box': (1, 0.02, 0.2), # box loss gain
- 'cls': (1, 0.2, 4.0), # cls loss gain
- 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
- 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
- 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
- 'iou_t': (0, 0.1, 0.7), # IoU training threshold
- 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
- 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
- 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
- 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
- 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
- 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
- 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
- 'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
- 'scale': (1, 0.0, 0.9), # image scale (+/- gain)
- 'shear': (1, 0.0, 10.0), # image shear (+/- deg)
- 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
- 'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
- 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
-                'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
- 'mixup': (1, 0.0, 1.0), # image mixup (probability)
- 'copy_paste': (1, 0.0, 1.0), # segment copy-paste (probability)
- 'paste_in': (1, 0.0, 1.0)} # segment copy-paste (probability)
-
- with open(opt.hyp, errors='ignore') as f:
- hyp = yaml.safe_load(f) # load hyps dict
- if 'anchors' not in hyp: # anchors commented in hyp.yaml
- hyp['anchors'] = 3
-
- assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
- opt.notest, opt.nosave = True, True # only test/save final epoch
- # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
- yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml' # save best result here
- if opt.bucket:
- os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists
-
- for _ in range(300): # generations to evolve
- if Path('evolve.txt').exists(): # if evolve.txt exists: select best hyps and mutate
- # Select parent(s)
- parent = 'single' # parent selection method: 'single' or 'weighted'
- x = np.loadtxt('evolve.txt', ndmin=2)
- n = min(5, len(x)) # number of previous results to consider
- x = x[np.argsort(-fitness(x))][:n] # top n mutations
- w = fitness(x) - fitness(x).min() # weights
- if parent == 'single' or len(x) == 1:
- # x = x[random.randint(0, n - 1)] # random selection
- x = x[random.choices(range(n), weights=w)[0]] # weighted selection
- elif parent == 'weighted':
- x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
-
- # Mutate
- mp, s = 0.8, 0.2 # mutation probability, sigma
- npr = np.random
- npr.seed(int(time.time()))
- g = np.array([x[0] for x in meta.values()]) # gains 0-1
- ng = len(meta)
- v = np.ones(ng)
- while all(v == 1): # mutate until a change occurs (prevent duplicates)
- v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
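-                # v is a multiplicative mutation factor near 1.0 (clipped to [0.3, 3.0]);
-                # each gene mutates with probability mp, scaled by its gain g and sigma s.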
- for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
- hyp[k] = float(x[i + 7] * v[i]) # mutate
-
- # Constrain to limits
- for k, v in meta.items():
- hyp[k] = max(hyp[k], v[1]) # lower limit
- hyp[k] = min(hyp[k], v[2]) # upper limit
- hyp[k] = round(hyp[k], 5) # significant digits
-
- # Train mutation
- results = train(hyp.copy(), opt, device)
-
- # Write mutation results
- print_mutation(hyp.copy(), results, yaml_file, opt.bucket)
-
- # Plot results
- plot_evolution(yaml_file)
- print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n'
- f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}')
diff --git a/cv/detection/yolov7/pytorch/train_aux.py b/cv/detection/yolov7/pytorch/train_aux.py
deleted file mode 100644
index 0e8053f8503ba762843f6dd56219f1e6c4e74ccc..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/train_aux.py
+++ /dev/null
@@ -1,699 +0,0 @@
-import argparse
-import logging
-import math
-import os
-import random
-import time
-from copy import deepcopy
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-import torch.optim.lr_scheduler as lr_scheduler
-import torch.utils.data
-import yaml
-from torch.cuda import amp
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.tensorboard import SummaryWriter
-from tqdm import tqdm
-
-import test # import test.py to get mAP after each epoch
-from models.experimental import attempt_load
-from models.yolo import Model
-from utils.autoanchor import check_anchors
-from utils.datasets import create_dataloader
-from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
- fitness, strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
- check_requirements, print_mutation, set_logging, one_cycle, colorstr
-from utils.google_utils import attempt_download
-from utils.loss import ComputeLoss, ComputeLossAuxOTA
-from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
-from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, is_parallel
-from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume
-
-logger = logging.getLogger(__name__)
-
-
-def train(hyp, opt, device, tb_writer=None):
- logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
- save_dir, epochs, batch_size, total_batch_size, weights, rank = \
- Path(opt.save_dir), opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank
-
- # Directories
- wdir = save_dir / 'weights'
- wdir.mkdir(parents=True, exist_ok=True) # make dir
- last = wdir / 'last.pt'
- best = wdir / 'best.pt'
- results_file = save_dir / 'results.txt'
-
- # Save run settings
- with open(save_dir / 'hyp.yaml', 'w') as f:
- yaml.dump(hyp, f, sort_keys=False)
- with open(save_dir / 'opt.yaml', 'w') as f:
- yaml.dump(vars(opt), f, sort_keys=False)
-
- # Configure
- plots = not opt.evolve # create plots
- cuda = device.type != 'cpu'
- init_seeds(2 + rank)
- with open(opt.data) as f:
- data_dict = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- is_coco = opt.data.endswith('coco.yaml')
-
- # Logging- Doing this before checking the dataset. Might update data_dict
- loggers = {'wandb': None} # loggers dict
- if rank in [-1, 0]:
- opt.hyp = hyp # add hyperparameters
-        run_id = torch.load(weights, map_location=device).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
- wandb_logger = WandbLogger(opt, Path(opt.save_dir).stem, run_id, data_dict)
- loggers['wandb'] = wandb_logger.wandb
- data_dict = wandb_logger.data_dict
- if wandb_logger.wandb:
- weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp # WandbLogger might update weights, epochs if resuming
-
- nc = 1 if opt.single_cls else int(data_dict['nc']) # number of classes
- names = ['item'] if opt.single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names
- assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check
-
- # Model
- pretrained = weights.endswith('.pt')
- if pretrained:
- with torch_distributed_zero_first(rank):
- attempt_download(weights) # download if not found locally
- ckpt = torch.load(weights, map_location=device) # load checkpoint
- model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- exclude = ['anchor'] if (opt.cfg or hyp.get('anchors')) and not opt.resume else [] # exclude keys
- state_dict = ckpt['model'].float().state_dict() # to FP32
- state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect
- model.load_state_dict(state_dict, strict=False) # load
- logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report
- else:
- model = Model(opt.cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- with torch_distributed_zero_first(rank):
- check_dataset(data_dict) # check
- train_path = data_dict['train']
- test_path = data_dict['val']
-
- # Freeze
- freeze = [] # parameter names to freeze (full or partial)
- for k, v in model.named_parameters():
- v.requires_grad = True # train all layers
- if any(x in k for x in freeze):
- print('freezing %s' % k)
- v.requires_grad = False
-
- # Optimizer
- nbs = 64 # nominal batch size
-    accumulate = max(round(nbs / total_batch_size), 1)  # accumulate gradients over this many batches before an optimizer step
- hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay
- logger.info(f"Scaled weight_decay = {hyp['weight_decay']}")
-
- pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
- for k, v in model.named_modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- pg2.append(v.bias) # biases
- if isinstance(v, nn.BatchNorm2d):
- pg0.append(v.weight) # no decay
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- pg1.append(v.weight) # apply decay
- if hasattr(v, 'im'):
- if hasattr(v.im, 'implicit'):
- pg0.append(v.im.implicit)
- else:
- for iv in v.im:
- pg0.append(iv.implicit)
- if hasattr(v, 'imc'):
- if hasattr(v.imc, 'implicit'):
- pg0.append(v.imc.implicit)
- else:
- for iv in v.imc:
- pg0.append(iv.implicit)
- if hasattr(v, 'imb'):
- if hasattr(v.imb, 'implicit'):
- pg0.append(v.imb.implicit)
- else:
- for iv in v.imb:
- pg0.append(iv.implicit)
- if hasattr(v, 'imo'):
- if hasattr(v.imo, 'implicit'):
- pg0.append(v.imo.implicit)
- else:
- for iv in v.imo:
- pg0.append(iv.implicit)
- if hasattr(v, 'ia'):
- if hasattr(v.ia, 'implicit'):
- pg0.append(v.ia.implicit)
- else:
- for iv in v.ia:
- pg0.append(iv.implicit)
- if hasattr(v, 'attn'):
- if hasattr(v.attn, 'logit_scale'):
- pg0.append(v.attn.logit_scale)
- if hasattr(v.attn, 'q_bias'):
- pg0.append(v.attn.q_bias)
- if hasattr(v.attn, 'v_bias'):
- pg0.append(v.attn.v_bias)
- if hasattr(v.attn, 'relative_position_bias_table'):
- pg0.append(v.attn.relative_position_bias_table)
- if hasattr(v, 'rbr_dense'):
- if hasattr(v.rbr_dense, 'weight_rbr_origin'):
- pg0.append(v.rbr_dense.weight_rbr_origin)
- if hasattr(v.rbr_dense, 'weight_rbr_avg_conv'):
- pg0.append(v.rbr_dense.weight_rbr_avg_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_pfir_conv'):
- pg0.append(v.rbr_dense.weight_rbr_pfir_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_idconv1'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_idconv1)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_conv2'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_conv2)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_dw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_dw)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_pw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_pw)
- if hasattr(v.rbr_dense, 'vector'):
- pg0.append(v.rbr_dense.vector)
-
- if opt.adam:
- optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
- else:
- optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
-
- optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay
- optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
- logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
- del pg0, pg1, pg2
-
- # Scheduler https://arxiv.org/pdf/1812.01187.pdf
- # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
- if opt.linear_lr:
- lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear
- else:
- lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf']
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- # plot_lr_scheduler(optimizer, scheduler, epochs)
-
- # EMA
- ema = ModelEMA(model) if rank in [-1, 0] else None
-
- # Resume
- start_epoch, best_fitness = 0, 0.0
- if pretrained:
- # Optimizer
- if ckpt['optimizer'] is not None:
- optimizer.load_state_dict(ckpt['optimizer'])
- best_fitness = ckpt['best_fitness']
-
- # EMA
- if ema and ckpt.get('ema'):
- ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
- ema.updates = ckpt['updates']
-
- # Results
- if ckpt.get('training_results') is not None:
- results_file.write_text(ckpt['training_results']) # write results.txt
-
- # Epochs
- start_epoch = ckpt['epoch'] + 1
- if opt.resume:
- assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
- if epochs < start_epoch:
- logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
- (weights, ckpt['epoch'], epochs))
- epochs += ckpt['epoch'] # finetune additional epochs
-
- del ckpt, state_dict
-
- # Image sizes
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj'])
- imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples
-
- # DP mode
- if cuda and rank == -1 and torch.cuda.device_count() > 1:
- model = torch.nn.DataParallel(model)
-
- # SyncBatchNorm
- if opt.sync_bn and cuda and rank != -1:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
- logger.info('Using SyncBatchNorm()')
-
- # Trainloader
- dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
- hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
- world_size=opt.world_size, workers=opt.workers,
- image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
- mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class
- nb = len(dataloader) # number of batches
- assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
-
- # Process 0
- if rank in [-1, 0]:
- testloader = create_dataloader(test_path, imgsz_test, batch_size * 2, gs, opt, # testloader
- hyp=hyp, cache=opt.cache_images and not opt.notest, rect=True, rank=-1,
- world_size=opt.world_size, workers=opt.workers,
- pad=0.5, prefix=colorstr('val: '))[0]
-
- if not opt.resume:
- labels = np.concatenate(dataset.labels, 0)
- c = torch.tensor(labels[:, 0]) # classes
- # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency
- # model._initialize_biases(cf.to(device))
- if plots:
- #plot_labels(labels, names, save_dir, loggers)
- if tb_writer:
- tb_writer.add_histogram('classes', c, 0)
-
- # Anchors
- if not opt.noautoanchor:
- check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
- model.half().float() # pre-reduce anchor precision
-
- # DDP mode
- if cuda and rank != -1:
- model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank,
- # nn.MultiheadAttention incompatibility with DDP https://github.com/pytorch/pytorch/issues/26698
- find_unused_parameters=any(isinstance(layer, nn.MultiheadAttention) for layer in model.modules()))
-
- # Model parameters
- hyp['box'] *= 3. / nl # scale to layers
- hyp['cls'] *= nc / 80. * 3. / nl # scale to classes and layers
- hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl # scale to image size and layers
- hyp['label_smoothing'] = opt.label_smoothing
- model.nc = nc # attach number of classes to model
- model.hyp = hyp # attach hyperparameters to model
- model.gr = 1.0 # iou loss ratio (obj_loss = 1.0 or iou)
- model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights
- model.names = names
-
- # Start training
- t0 = time.time()
- nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations)
- # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
- maps = np.zeros(nc) # mAP per class
- results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
- scheduler.last_epoch = start_epoch - 1 # do not move
- scaler = amp.GradScaler(enabled=cuda)
-    compute_loss_ota = ComputeLossAuxOTA(model)  # init OTA loss that also supervises the auxiliary heads
- compute_loss = ComputeLoss(model) # init loss class
- logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n'
- f'Using {dataloader.num_workers} dataloader workers\n'
- f'Logging results to {save_dir}\n'
- f'Starting training for {epochs} epochs...')
- torch.save(model, wdir / 'init.pt')
- for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
- model.train()
-
- # Update image weights (optional)
- if opt.image_weights:
- # Generate indices
- if rank in [-1, 0]:
- cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights
- iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
- dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
- # Broadcast if DDP
- if rank != -1:
- indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int()
- dist.broadcast(indices, 0)
- if rank != 0:
- dataset.indices = indices.cpu().numpy()
-
- # Update mosaic border
- # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
- # dataset.mosaic_border = [b - imgsz, -b] # height, width borders
-
- mloss = torch.zeros(4, device=device) # mean losses
- if rank != -1:
- dataloader.sampler.set_epoch(epoch)
- pbar = enumerate(dataloader)
- logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'labels', 'img_size'))
- if rank in [-1, 0]:
- pbar = tqdm(pbar, total=nb) # progress bar
- optimizer.zero_grad()
- for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
- ni = i + nb * epoch # number integrated batches (since train start)
- imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0
-
- # Warmup
- if ni <= nw:
- xi = [0, nw] # x interp
- # model.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
- accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
- for j, x in enumerate(optimizer.param_groups):
- # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
- x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
- if 'momentum' in x:
- x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
-
- # Multi-scale
- if opt.multi_scale:
-                sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs  # size (randrange requires ints)
- sf = sz / max(imgs.shape[2:]) # scale factor
- if sf != 1:
- ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
- imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
-
- # Forward
- with amp.autocast(enabled=cuda):
- pred = model(imgs) # forward
- loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size
- if rank != -1:
- loss *= opt.world_size # gradient averaged between devices in DDP mode
- if opt.quad:
- loss *= 4.
-
- # Backward
- scaler.scale(loss).backward()
-
- # Optimize
- if ni % accumulate == 0:
- scaler.step(optimizer) # optimizer.step
- scaler.update()
- optimizer.zero_grad()
- if ema:
- ema.update(model)
-
- # Print
- if rank in [-1, 0]:
- mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
- mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
- s = ('%10s' * 2 + '%10.4g' * 6) % (
- '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
- pbar.set_description(s)
-
- # Plot
- if plots and ni < 10:
- f = save_dir / f'train_batch{ni}.jpg' # filename
- Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
- # if tb_writer:
- # tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
- # tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), []) # add model graph
- elif plots and ni == 10 and wandb_logger.wandb:
- wandb_logger.log({"Mosaics": [wandb_logger.wandb.Image(str(x), caption=x.name) for x in
- save_dir.glob('train*.jpg') if x.exists()]})
-
- # end batch ------------------------------------------------------------------------------------------------
- # end epoch ----------------------------------------------------------------------------------------------------
-
- # Scheduler
- lr = [x['lr'] for x in optimizer.param_groups] # for tensorboard
- scheduler.step()
-
- # DDP process 0 or single-GPU
- if rank in [-1, 0]:
- # mAP
- ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
- final_epoch = epoch + 1 == epochs
- if not opt.notest or final_epoch: # Calculate mAP
- wandb_logger.current_epoch = epoch + 1
- results, maps, times = test.test(data_dict,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- model=ema.ema,
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- verbose=nc < 50 and final_epoch,
- plots=plots and final_epoch,
- wandb_logger=wandb_logger,
- compute_loss=compute_loss,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Write
- with open(results_file, 'a') as f:
- f.write(s + '%10.4g' * 7 % results + '\n') # append metrics, val_loss
- if len(opt.name) and opt.bucket:
- os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))
-
- # Log
- tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss
- 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
- 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss
- 'x/lr0', 'x/lr1', 'x/lr2'] # params
- for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
- if tb_writer:
- tb_writer.add_scalar(tag, x, epoch) # tensorboard
- if wandb_logger.wandb:
- wandb_logger.log({tag: x}) # W&B
-
- # Update best mAP
- fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
- if fi > best_fitness:
- best_fitness = fi
- wandb_logger.end_epoch(best_result=best_fitness == fi)
-
- # Save model
- if (not opt.nosave) or (final_epoch and not opt.evolve): # if save
- ckpt = {'epoch': epoch,
- 'best_fitness': best_fitness,
- 'training_results': results_file.read_text(),
- 'model': deepcopy(model.module if is_parallel(model) else model).half(),
- 'ema': deepcopy(ema.ema).half(),
- 'updates': ema.updates,
- 'optimizer': optimizer.state_dict(),
- 'wandb_id': wandb_logger.wandb_run.id if wandb_logger.wandb else None}
-
- # Save last, best and delete
- torch.save(ckpt, last)
- if best_fitness == fi:
- torch.save(ckpt, best)
- if (best_fitness == fi) and (epoch >= 200):
- torch.save(ckpt, wdir / 'best_{:03d}.pt'.format(epoch))
- if epoch == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif ((epoch+1) % 25) == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif epoch >= (epochs-5):
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- if wandb_logger.wandb:
- if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1:
- wandb_logger.log_model(
- last.parent, opt, epoch, fi, best_model=best_fitness == fi)
- del ckpt
-
- # end epoch ----------------------------------------------------------------------------------------------------
- # end training
- if rank in [-1, 0]:
- # Plots
- if plots:
- plot_results(save_dir=save_dir) # save as results.png
- if wandb_logger.wandb:
- files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]]
- wandb_logger.log({"Results": [wandb_logger.wandb.Image(str(save_dir / f), caption=f) for f in files
- if (save_dir / f).exists()]})
- # Test best.pt
- logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
- if opt.data.endswith('coco.yaml') and nc == 80: # if COCO
-            for m in ((last, best) if best.exists() else (last,)):  # speed, mAP tests; (last,) keeps the fallback iterable over paths
- results, _, _ = test.test(opt.data,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- conf_thres=0.001,
- iou_thres=0.7,
- model=attempt_load(m, device).half(),
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- save_json=True,
- plots=False,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Strip optimizers
- final = best if best.exists() else last # final model
- for f in last, best:
- if f.exists():
- strip_optimizer(f) # strip optimizers
- if opt.bucket:
- os.system(f'gsutil cp {final} gs://{opt.bucket}/weights') # upload
- if wandb_logger.wandb and not opt.evolve: # Log the stripped model
- wandb_logger.wandb.log_artifact(str(final), type='model',
- name='run_' + wandb_logger.wandb_run.id + '_model',
- aliases=['last', 'best', 'stripped'])
- wandb_logger.finish_run()
- else:
- dist.destroy_process_group()
- torch.cuda.empty_cache()
- return results
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
-    parser.add_argument('--weights', type=str, default='yolov7.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
- parser.add_argument('--hyp', type=str, default='data/hyp.scratch.p5.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=300)
- parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--notest', action='store_true', help='only test final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
- parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
- parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
- parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
- parser.add_argument('--project', default='runs/train', help='save to project/name')
- parser.add_argument('--entity', default=None, help='W&B entity')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--quad', action='store_true', help='quad dataloader')
- parser.add_argument('--linear-lr', action='store_true', help='linear LR')
- parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
- parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
- parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
- parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
- parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
- parser.add_argument('--v5-metric', action='store_true', help='assume maximum recall as 1.0 in AP calculation')
- opt = parser.parse_args()
-
- # Set DDP variables
- opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1
- opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1
- set_logging(opt.global_rank)
- #if opt.global_rank in [-1, 0]:
- # check_git_status()
- # check_requirements()
-
- # Resume
- wandb_run = check_wandb_resume(opt)
- if opt.resume and not wandb_run: # resume an interrupted run
- ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path
- assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
- apriori = opt.global_rank, opt.local_rank
- with open(Path(ckpt).parent.parent / 'opt.yaml') as f:
- opt = argparse.Namespace(**yaml.load(f, Loader=yaml.SafeLoader)) # replace
- opt.cfg, opt.weights, opt.resume, opt.batch_size, opt.global_rank, opt.local_rank = '', ckpt, True, opt.total_batch_size, *apriori # reinstate
- logger.info('Resuming training from %s' % ckpt)
- else:
- # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
- opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
- assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
- opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test)
- opt.name = 'evolve' if opt.evolve else opt.name
- opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok | opt.evolve) # increment run
-
- # DDP mode
- opt.total_batch_size = opt.batch_size
- device = select_device(opt.device, batch_size=opt.batch_size)
- if opt.local_rank != -1:
- assert torch.cuda.device_count() > opt.local_rank
- torch.cuda.set_device(opt.local_rank)
- device = torch.device('cuda', opt.local_rank)
- dist.init_process_group(backend='nccl', init_method='env://') # distributed backend
- assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
- opt.batch_size = opt.total_batch_size // opt.world_size
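-        # e.g. an 8-GPU job is typically launched with: python -m torch.distributed.launch --nproc_per_node 8 train.py ...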
-
- # Hyperparameters
- with open(opt.hyp) as f:
- hyp = yaml.load(f, Loader=yaml.SafeLoader) # load hyps
-
- # Train
- logger.info(opt)
- if not opt.evolve:
- tb_writer = None # init loggers
- if opt.global_rank in [-1, 0]:
- prefix = colorstr('tensorboard: ')
- logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
- tb_writer = SummaryWriter(opt.save_dir) # Tensorboard
- train(hyp, opt, device, tb_writer)
-
- # Evolve hyperparameters (optional)
- else:
- # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
- meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
- 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
- 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
- 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
- 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
- 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
- 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
- 'box': (1, 0.02, 0.2), # box loss gain
- 'cls': (1, 0.2, 4.0), # cls loss gain
- 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
- 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
- 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
- 'iou_t': (0, 0.1, 0.7), # IoU training threshold
- 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
- 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
- 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
- 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
- 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
- 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
- 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
- 'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
- 'scale': (1, 0.0, 0.9), # image scale (+/- gain)
- 'shear': (1, 0.0, 10.0), # image shear (+/- deg)
- 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
- 'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
- 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
-                'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
- 'mixup': (1, 0.0, 1.0)} # image mixup (probability)
-
- with open(opt.hyp, errors='ignore') as f:
- hyp = yaml.safe_load(f) # load hyps dict
- if 'anchors' not in hyp: # anchors commented in hyp.yaml
- hyp['anchors'] = 3
-
- assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
- opt.notest, opt.nosave = True, True # only test/save final epoch
- # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
- yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml' # save best result here
- if opt.bucket:
- os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists
-
- for _ in range(300): # generations to evolve
- if Path('evolve.txt').exists(): # if evolve.txt exists: select best hyps and mutate
- # Select parent(s)
- parent = 'single' # parent selection method: 'single' or 'weighted'
- x = np.loadtxt('evolve.txt', ndmin=2)
- n = min(5, len(x)) # number of previous results to consider
- x = x[np.argsort(-fitness(x))][:n] # top n mutations
- w = fitness(x) - fitness(x).min() # weights
- if parent == 'single' or len(x) == 1:
- # x = x[random.randint(0, n - 1)] # random selection
- x = x[random.choices(range(n), weights=w)[0]] # weighted selection
- elif parent == 'weighted':
- x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
-
- # Mutate
- mp, s = 0.8, 0.2 # mutation probability, sigma
- npr = np.random
- npr.seed(int(time.time()))
- g = np.array([x[0] for x in meta.values()]) # gains 0-1
- ng = len(meta)
- v = np.ones(ng)
- while all(v == 1): # mutate until a change occurs (prevent duplicates)
- v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
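-                # each gene is multiplied by a factor drawn around 1 (sigma s, probability mp, scaled by its meta gain)
-                # and clipped to [0.3, 3.0]; evolve.txt rows store 7 result values first, so hyps start at column 7 below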
- for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
- hyp[k] = float(x[i + 7] * v[i]) # mutate
-
- # Constrain to limits
- for k, v in meta.items():
- hyp[k] = max(hyp[k], v[1]) # lower limit
- hyp[k] = min(hyp[k], v[2]) # upper limit
- hyp[k] = round(hyp[k], 5) # significant digits
-
- # Train mutation
- results = train(hyp.copy(), opt, device)
-
- # Write mutation results
- print_mutation(hyp.copy(), results, yaml_file, opt.bucket)
-
- # Plot results
- plot_evolution(yaml_file)
- print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n'
- f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}')
diff --git a/cv/detection/yolov7/pytorch/utils/__init__.py b/cv/detection/yolov7/pytorch/utils/__init__.py
deleted file mode 100644
index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# init
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/utils/activations.py b/cv/detection/yolov7/pytorch/utils/activations.py
deleted file mode 100644
index aa3ddf071d28daa3061b6d796cb60cd7a88f557c..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/activations.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Activation functions
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-# SiLU https://arxiv.org/pdf/1606.08415.pdf ----------------------------------------------------------------------------
-class SiLU(nn.Module): # export-friendly version of nn.SiLU()
- @staticmethod
- def forward(x):
- return x * torch.sigmoid(x)
-
-
-class Hardswish(nn.Module): # export-friendly version of nn.Hardswish()
- @staticmethod
- def forward(x):
- # return x * F.hardsigmoid(x) # for torchscript and CoreML
- return x * F.hardtanh(x + 3, 0., 6.) / 6. # for torchscript, CoreML and ONNX
-
-
-class MemoryEfficientSwish(nn.Module):
- class F(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x):
- ctx.save_for_backward(x)
- return x * torch.sigmoid(x)
-
- @staticmethod
- def backward(ctx, grad_output):
- x = ctx.saved_tensors[0]
- sx = torch.sigmoid(x)
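-            # derivative of SiLU: d/dx [x * sigmoid(x)] = sigmoid(x) * (1 + x * (1 - sigmoid(x)))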
- return grad_output * (sx * (1 + x * (1 - sx)))
-
- def forward(self, x):
- return self.F.apply(x)
-
-
-# Mish https://github.com/digantamisra98/Mish --------------------------------------------------------------------------
-class Mish(nn.Module):
- @staticmethod
- def forward(x):
- return x * F.softplus(x).tanh()
-
-
-class MemoryEfficientMish(nn.Module):
- class F(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x):
- ctx.save_for_backward(x)
- return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x)))
-
- @staticmethod
- def backward(ctx, grad_output):
- x = ctx.saved_tensors[0]
- sx = torch.sigmoid(x)
- fx = F.softplus(x).tanh()
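-            # derivative of Mish: d/dx [x * tanh(softplus(x))] = fx + x * sigmoid(x) * (1 - fx^2)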
- return grad_output * (fx + x * sx * (1 - fx * fx))
-
- def forward(self, x):
- return self.F.apply(x)
-
-
-# FReLU https://arxiv.org/abs/2007.11824 -------------------------------------------------------------------------------
-class FReLU(nn.Module):
- def __init__(self, c1, k=3): # ch_in, kernel
- super().__init__()
- self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)
- self.bn = nn.BatchNorm2d(c1)
-
- def forward(self, x):
- return torch.max(x, self.bn(self.conv(x)))
diff --git a/cv/detection/yolov7/pytorch/utils/add_nms.py b/cv/detection/yolov7/pytorch/utils/add_nms.py
deleted file mode 100644
index 0a1f7976a2051d07bb028f9fd68eb52f45234f43..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/add_nms.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import numpy as np
-import onnx
-from onnx import shape_inference
-try:
- import onnx_graphsurgeon as gs
-except Exception as e:
-    print('Failed to import onnx_graphsurgeon: %s' % e)
-
-import logging
-
-LOGGER = logging.getLogger(__name__)
-
-class RegisterNMS(object):
- def __init__(
- self,
- onnx_model_path: str,
- precision: str = "fp32",
- ):
-
- self.graph = gs.import_onnx(onnx.load(onnx_model_path))
- assert self.graph
- LOGGER.info("ONNX graph created successfully")
- # Fold constants via ONNX-GS that PyTorch2ONNX may have missed
- self.graph.fold_constants()
- self.precision = precision
-        self.batch_size = 1
-
-    def infer(self):
- """
-        Sanitize the graph by cleaning any unconnected nodes, doing a topological resort,
-        and folding constant input values. When possible, run shape inference on the
- ONNX graph to determine tensor shapes.
- """
- for _ in range(3):
- count_before = len(self.graph.nodes)
-
- self.graph.cleanup().toposort()
- try:
- for node in self.graph.nodes:
- for o in node.outputs:
- o.shape = None
- model = gs.export_onnx(self.graph)
- model = shape_inference.infer_shapes(model)
- self.graph = gs.import_onnx(model)
- except Exception as e:
- LOGGER.info(f"Shape inference could not be performed at this time:\n{e}")
- try:
- self.graph.fold_constants(fold_shapes=True)
- except TypeError as e:
- LOGGER.error(
- "This version of ONNX GraphSurgeon does not support folding shapes, "
- f"please upgrade your onnx_graphsurgeon module. Error:\n{e}"
- )
- raise
-
- count_after = len(self.graph.nodes)
- if count_before == count_after:
- # No new folding occurred in this iteration, so we can stop for now.
- break
-
- def save(self, output_path):
- """
- Save the ONNX model to the given location.
- Args:
-            output_path: Path where the updated ONNX model is written.
- """
- self.graph.cleanup().toposort()
- model = gs.export_onnx(self.graph)
- onnx.save(model, output_path)
- LOGGER.info(f"Saved ONNX model to {output_path}")
-
- def register_nms(
- self,
- *,
- score_thresh: float = 0.25,
- nms_thresh: float = 0.45,
- detections_per_img: int = 100,
- ):
- """
- Register the ``EfficientNMS_TRT`` plugin node.
- NMS expects these shapes for its input tensors:
- - box_net: [batch_size, number_boxes, 4]
- - class_net: [batch_size, number_boxes, number_labels]
- Args:
- score_thresh (float): The scalar threshold for score (low scoring boxes are removed).
- nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU
- overlap with previously selected boxes are removed).
- detections_per_img (int): Number of best detections to keep after NMS.
- """
-
- self.infer()
-        # Use the graph's current outputs as inputs for the NMS plugin node
- op_inputs = self.graph.outputs
- op = "EfficientNMS_TRT"
- attrs = {
- "plugin_version": "1",
- "background_class": -1, # no background class
- "max_output_boxes": detections_per_img,
- "score_threshold": score_thresh,
- "iou_threshold": nms_thresh,
- "score_activation": False,
- "box_coding": 0,
- }
-
- if self.precision == "fp32":
- dtype_output = np.float32
- elif self.precision == "fp16":
- dtype_output = np.float16
- else:
-            raise NotImplementedError(f"Precision not currently supported: {self.precision}")
-
- # NMS Outputs
- output_num_detections = gs.Variable(
- name="num_dets",
- dtype=np.int32,
- shape=[self.batch_size, 1],
- ) # A scalar indicating the number of valid detections per batch image.
- output_boxes = gs.Variable(
- name="det_boxes",
- dtype=dtype_output,
- shape=[self.batch_size, detections_per_img, 4],
- )
- output_scores = gs.Variable(
- name="det_scores",
- dtype=dtype_output,
- shape=[self.batch_size, detections_per_img],
- )
- output_labels = gs.Variable(
- name="det_classes",
- dtype=np.int32,
- shape=[self.batch_size, detections_per_img],
- )
-
- op_outputs = [output_num_detections, output_boxes, output_scores, output_labels]
-
- # Create the NMS Plugin node with the selected inputs. The outputs of the node will also
- # become the final outputs of the graph.
- self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs)
- LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}")
-
- self.graph.outputs = op_outputs
-
- self.infer()
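-
-
-# Minimal usage sketch (illustrative; the .onnx file names are assumptions, not part of this module):
-#   nms = RegisterNMS('yolov7.onnx', precision='fp16')
-#   nms.register_nms(score_thresh=0.25, nms_thresh=0.45, detections_per_img=100)
-#   nms.save('yolov7-with-nms.onnx')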
diff --git a/cv/detection/yolov7/pytorch/utils/autoanchor.py b/cv/detection/yolov7/pytorch/utils/autoanchor.py
deleted file mode 100644
index f491032e53ab43cd81d966d127bd92f9b414b9fe..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/autoanchor.py
+++ /dev/null
@@ -1,160 +0,0 @@
-# Auto-anchor utils
-
-import numpy as np
-import torch
-import yaml
-from scipy.cluster.vq import kmeans
-from tqdm import tqdm
-
-from utils.general import colorstr
-
-
-def check_anchor_order(m):
- # Check anchor order against stride order for YOLO Detect() module m, and correct if necessary
- a = m.anchor_grid.prod(-1).view(-1) # anchor area
- da = a[-1] - a[0] # delta a
- ds = m.stride[-1] - m.stride[0] # delta s
-    if da.sign() != ds.sign():  # anchor order and stride order differ, so reverse anchors
- print('Reversing anchor order')
- m.anchors[:] = m.anchors.flip(0)
- m.anchor_grid[:] = m.anchor_grid.flip(0)
-
-
-def check_anchors(dataset, model, thr=4.0, imgsz=640):
- # Check anchor fit to data, recompute if necessary
- prefix = colorstr('autoanchor: ')
- print(f'\n{prefix}Analyzing anchors... ', end='')
- m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
- shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale
- wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh
-
- def metric(k): # compute metric
- r = wh[:, None] / k[None]
- x = torch.min(r, 1. / r).min(2)[0] # ratio metric
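-        # symmetric w/h ratio: 1.0 is a perfect anchor fit; a label is recallable if its best anchor ratio > 1/thr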
- best = x.max(1)[0] # best_x
- aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold
- bpr = (best > 1. / thr).float().mean() # best possible recall
- return bpr, aat
-
- anchors = m.anchor_grid.clone().cpu().view(-1, 2) # current anchors
- bpr, aat = metric(anchors)
- print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='')
- if bpr < 0.98: # threshold to recompute
- print('. Attempting to improve anchors, please wait...')
- na = m.anchor_grid.numel() // 2 # number of anchors
- try:
- anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
- except Exception as e:
- print(f'{prefix}ERROR: {e}')
- new_bpr = metric(anchors)[0]
- if new_bpr > bpr: # replace anchors
- anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
- m.anchor_grid[:] = anchors.clone().view_as(m.anchor_grid) # for inference
- check_anchor_order(m)
- m.anchors[:] = anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss
- print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.')
- else:
- print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.')
- print('') # newline
-
-
-def kmean_anchors(path='./data/coco.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
- """ Creates kmeans-evolved anchors from training dataset
-
- Arguments:
- path: path to dataset *.yaml, or a loaded dataset
- n: number of anchors
- img_size: image size used for training
- thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
- gen: generations to evolve anchors using genetic algorithm
- verbose: print all results
-
- Return:
- k: kmeans evolved anchors
-
- Usage:
- from utils.autoanchor import *; _ = kmean_anchors()
- """
- thr = 1. / thr
- prefix = colorstr('autoanchor: ')
-
- def metric(k, wh): # compute metrics
- r = wh[:, None] / k[None]
- x = torch.min(r, 1. / r).min(2)[0] # ratio metric
- # x = wh_iou(wh, torch.tensor(k)) # iou metric
- return x, x.max(1)[0] # x, best_x
-
- def anchor_fitness(k): # mutation fitness
- _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
- return (best * (best > thr).float()).mean() # fitness
-
- def print_results(k):
- k = k[np.argsort(k.prod(1))] # sort small to large
- x, best = metric(k, wh0)
- bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
- print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr')
- print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, '
- f'past_thr={x[x > thr].mean():.3f}-mean: ', end='')
- for i, x in enumerate(k):
- print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg
- return k
-
- if isinstance(path, str): # *.yaml file
- with open(path) as f:
- data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict
- from utils.datasets import LoadImagesAndLabels
- dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
- else:
- dataset = path # dataset
-
- # Get label wh
- shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
-
- # Filter
- i = (wh0 < 3.0).any(1).sum()
- if i:
- print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
-    wh = wh0[(wh0 >= 2.0).any(1)]  # keep labels with any side >= 2 pixels
- # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
-
- # Kmeans calculation
- print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...')
- s = wh.std(0) # sigmas for whitening
- k, dist = kmeans(wh / s, n, iter=30) # points, mean distance
-    assert len(k) == n, f'{prefix}ERROR: scipy.cluster.vq.kmeans requested {n} points but returned only {len(k)}'
- k *= s
- wh = torch.tensor(wh, dtype=torch.float32) # filtered
- wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered
- k = print_results(k)
-
- # Plot
- # k, d = [None] * 20, [None] * 20
- # for i in tqdm(range(1, 21)):
- # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
- # ax = ax.ravel()
- # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
- # ax[0].hist(wh[wh[:, 0]<100, 0],400)
- # ax[1].hist(wh[wh[:, 1]<100, 1],400)
- # fig.savefig('wh.png', dpi=200)
-
- # Evolve
- npr = np.random
-    f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, anchor shape, mutation probability, sigma
- pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar
- for _ in pbar:
- v = np.ones(sh)
- while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
- v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
- kg = (k.copy() * v).clip(min=2.0)
- fg = anchor_fitness(kg)
- if fg > f:
- f, k = fg, kg.copy()
- pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
- if verbose:
- print_results(k)
-
- return print_results(k)
diff --git a/cv/detection/yolov7/pytorch/utils/aws/__init__.py b/cv/detection/yolov7/pytorch/utils/aws/__init__.py
deleted file mode 100644
index e9691f241edc06ad981b36ca27f7eff9e46686ed..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/aws/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-#init
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/utils/aws/mime.sh b/cv/detection/yolov7/pytorch/utils/aws/mime.sh
deleted file mode 100644
index c319a83cfbdf09bea634c3bd9fca737c0b1dd505..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/aws/mime.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
-# This script will run on every instance restart, not only on first start
-# --- DO NOT COPY ABOVE COMMENTS WHEN PASTING INTO USERDATA ---
-
-Content-Type: multipart/mixed; boundary="//"
-MIME-Version: 1.0
-
---//
-Content-Type: text/cloud-config; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="cloud-config.txt"
-
-#cloud-config
-cloud_final_modules:
-- [scripts-user, always]
-
---//
-Content-Type: text/x-shellscript; charset="us-ascii"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-Content-Disposition: attachment; filename="userdata.txt"
-
-#!/bin/bash
-# --- paste contents of userdata.sh here ---
---//
diff --git a/cv/detection/yolov7/pytorch/utils/aws/resume.py b/cv/detection/yolov7/pytorch/utils/aws/resume.py
deleted file mode 100644
index 338685b19c19ddb47aa2fde22a535a8efcf17802..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/aws/resume.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Resume all interrupted trainings in yolov7/ dir including DDP trainings
-# Usage: $ python utils/aws/resume.py
-
-import os
-import sys
-from pathlib import Path
-
-import torch
-import yaml
-
-sys.path.append('./') # to run '$ python *.py' files in subdirectories
-
-port = 0 # --master_port
-path = Path('').resolve()
-for last in path.rglob('*/**/last.pt'):
- ckpt = torch.load(last)
- if ckpt['optimizer'] is None:
- continue
-
- # Load opt.yaml
- with open(last.parent.parent / 'opt.yaml') as f:
- opt = yaml.load(f, Loader=yaml.SafeLoader)
-
- # Get device count
- d = opt['device'].split(',') # devices
- nd = len(d) # number of devices
- ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel
-
- if ddp: # multi-GPU
- port += 1
- cmd = f'python -m torch.distributed.launch --nproc_per_node {nd} --master_port {port} train.py --resume {last}'
- else: # single-GPU
- cmd = f'python train.py --resume {last}'
-
-    cmd += ' > /dev/null 2>&1 &'  # redirect output to /dev/null and run in the background
- print(cmd)
- os.system(cmd)
diff --git a/cv/detection/yolov7/pytorch/utils/aws/userdata.sh b/cv/detection/yolov7/pytorch/utils/aws/userdata.sh
deleted file mode 100644
index 5a99d4bec7400a08069ce40e8b02928d4b4e06ee..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/aws/userdata.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
-# This script will run only once on first instance start (for a re-start script see mime.sh)
-# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir
-# Use >300 GB SSD
-
-cd /home/ubuntu
-if [ ! -d yolov7 ]; then
- echo "Running first-time script." # install dependencies, download COCO, pull Docker
- git clone -b main https://github.com/WongKinYiu/yolov7 && sudo chmod -R 777 yolov7
- cd yolov7
- bash data/scripts/get_coco.sh && echo "Data done." &
- sudo docker pull nvcr.io/nvidia/pytorch:21.08-py3 && echo "Docker done." &
- python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." &
- wait && echo "All tasks done." # finish background tasks
-else
- echo "Running re-start script." # resume interrupted runs
- i=0
- list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour'
- while IFS= read -r id; do
- ((i++))
- echo "restarting container $i: $id"
- sudo docker start $id
- # sudo docker exec -it $id python train.py --resume # single-GPU
- sudo docker exec -d $id python utils/aws/resume.py # multi-scenario
- done <<<"$list"
-fi
diff --git a/cv/detection/yolov7/pytorch/utils/datasets.py b/cv/detection/yolov7/pytorch/utils/datasets.py
deleted file mode 100644
index 5fe4f7bcc28a91e83313c5372029928d0b8c0fd5..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/datasets.py
+++ /dev/null
@@ -1,1320 +0,0 @@
-# Dataset utils and dataloaders
-
-import glob
-import logging
-import math
-import os
-import random
-import shutil
-import time
-from itertools import repeat
-from multiprocessing.pool import ThreadPool
-from pathlib import Path
-from threading import Thread
-
-import cv2
-import numpy as np
-import torch
-import torch.nn.functional as F
-from PIL import Image, ExifTags
-from torch.utils.data import Dataset
-from tqdm import tqdm
-
-import pickle
-from copy import deepcopy
-#from pycocotools import mask as maskUtils
-from torchvision.utils import save_image
-from torchvision.ops import roi_pool, roi_align, ps_roi_pool, ps_roi_align
-
-from utils.general import check_requirements, xyxy2xywh, xywh2xyxy, xywhn2xyxy, xyn2xy, segment2box, segments2boxes, \
- resample_segments, clean_str
-from utils.torch_utils import torch_distributed_zero_first
-
-# Parameters
-help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
-img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo'] # acceptable image suffixes
-vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes
-logger = logging.getLogger(__name__)
-
-# Get orientation exif tag
-for orientation in ExifTags.TAGS.keys():
- if ExifTags.TAGS[orientation] == 'Orientation':
- break
-
-
-def get_hash(files):
- # Returns a single hash value of a list of files
- return sum(os.path.getsize(f) for f in files if os.path.isfile(f))
-
-
-def exif_size(img):
- # Returns exif-corrected PIL size
- s = img.size # (width, height)
- try:
- rotation = dict(img._getexif().items())[orientation]
- if rotation == 6: # rotation 270
- s = (s[1], s[0])
- elif rotation == 8: # rotation 90
- s = (s[1], s[0])
-    except Exception:  # image has no EXIF orientation data
- pass
-
- return s
-
-
-def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False,
- rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix=''):
-    # Make sure only the first process in DDP processes the dataset first, so the following ones can use the cache
- with torch_distributed_zero_first(rank):
- dataset = LoadImagesAndLabels(path, imgsz, batch_size,
- augment=augment, # augment images
- hyp=hyp, # augmentation hyperparameters
- rect=rect, # rectangular training
- cache_images=cache,
- single_cls=opt.single_cls,
- stride=int(stride),
- pad=pad,
- image_weights=image_weights,
- prefix=prefix)
-
- batch_size = min(batch_size, len(dataset))
- nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers
- sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None
- loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader
-    # Use torch.utils.data.DataLoader() if dataset attributes will update during training, else InfiniteDataLoader()
- dataloader = loader(dataset,
- batch_size=batch_size,
- num_workers=nw,
- sampler=sampler,
- pin_memory=True,
- collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn)
- return dataloader, dataset
-
-
-class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader):
- """ Dataloader that reuses workers
-
- Uses same syntax as vanilla DataLoader
- """
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler))
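-        # _RepeatSampler yields batches forever from one persistent iterator, so dataloader workers survive across epochs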
- self.iterator = super().__iter__()
-
- def __len__(self):
- return len(self.batch_sampler.sampler)
-
- def __iter__(self):
- for i in range(len(self)):
- yield next(self.iterator)
-
-
-class _RepeatSampler(object):
- """ Sampler that repeats forever
-
- Args:
- sampler (Sampler)
- """
-
- def __init__(self, sampler):
- self.sampler = sampler
-
- def __iter__(self):
- while True:
- yield from iter(self.sampler)
-
-
-class LoadImages: # for inference
- def __init__(self, path, img_size=640, stride=32):
- p = str(Path(path).absolute()) # os-agnostic absolute path
- if '*' in p:
- files = sorted(glob.glob(p, recursive=True)) # glob
- elif os.path.isdir(p):
- files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir
- elif os.path.isfile(p):
- files = [p] # files
- else:
- raise Exception(f'ERROR: {p} does not exist')
-
- images = [x for x in files if x.split('.')[-1].lower() in img_formats]
- videos = [x for x in files if x.split('.')[-1].lower() in vid_formats]
- ni, nv = len(images), len(videos)
-
- self.img_size = img_size
- self.stride = stride
- self.files = images + videos
- self.nf = ni + nv # number of files
- self.video_flag = [False] * ni + [True] * nv
- self.mode = 'image'
- if any(videos):
- self.new_video(videos[0]) # new video
- else:
- self.cap = None
- assert self.nf > 0, f'No images or videos found in {p}. ' \
- f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}'
-
- def __iter__(self):
- self.count = 0
- return self
-
- def __next__(self):
- if self.count == self.nf:
- raise StopIteration
- path = self.files[self.count]
-
- if self.video_flag[self.count]:
- # Read video
- self.mode = 'video'
- ret_val, img0 = self.cap.read()
- if not ret_val:
- self.count += 1
- self.cap.release()
- if self.count == self.nf: # last video
- raise StopIteration
- else:
- path = self.files[self.count]
- self.new_video(path)
- ret_val, img0 = self.cap.read()
-
- self.frame += 1
- print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.nframes}) {path}: ', end='')
-
- else:
- # Read image
- self.count += 1
- img0 = cv2.imread(path) # BGR
- assert img0 is not None, 'Image Not Found ' + path
- #print(f'image {self.count}/{self.nf} {path}: ', end='')
-
- # Padded resize
- img = letterbox(img0, self.img_size, stride=self.stride)[0]
-
- # Convert
- img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- img = np.ascontiguousarray(img)
-
- return path, img, img0, self.cap
-
- def new_video(self, path):
- self.frame = 0
- self.cap = cv2.VideoCapture(path)
- self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))
-
- def __len__(self):
- return self.nf # number of files
-
-
-class LoadWebcam: # for inference
- def __init__(self, pipe='0', img_size=640, stride=32):
- self.img_size = img_size
- self.stride = stride
-
- if pipe.isnumeric():
-            pipe = int(pipe)  # local camera index (avoids eval on user input)
- # pipe = 'rtsp://192.168.1.64/1' # IP camera
- # pipe = 'rtsp://username:password@192.168.1.64/1' # IP camera with login
- # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg' # IP golf camera
-
- self.pipe = pipe
- self.cap = cv2.VideoCapture(pipe) # video capture object
- self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size
-
- def __iter__(self):
- self.count = -1
- return self
-
- def __next__(self):
- self.count += 1
- if cv2.waitKey(1) == ord('q'): # q to quit
- self.cap.release()
- cv2.destroyAllWindows()
- raise StopIteration
-
- # Read frame
- if self.pipe == 0: # local camera
- ret_val, img0 = self.cap.read()
- img0 = cv2.flip(img0, 1) # flip left-right
- else: # IP camera
- n = 0
- while True:
- n += 1
- self.cap.grab()
- if n % 30 == 0: # skip frames
- ret_val, img0 = self.cap.retrieve()
- if ret_val:
- break
-
- # Print
- assert ret_val, f'Camera Error {self.pipe}'
- img_path = 'webcam.jpg'
- print(f'webcam {self.count}: ', end='')
-
- # Padded resize
- img = letterbox(img0, self.img_size, stride=self.stride)[0]
-
- # Convert
- img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- img = np.ascontiguousarray(img)
-
- return img_path, img, img0, None
-
- def __len__(self):
- return 0
-
-
-class LoadStreams: # multiple IP or RTSP cameras
- def __init__(self, sources='streams.txt', img_size=640, stride=32):
- self.mode = 'stream'
- self.img_size = img_size
- self.stride = stride
-
- if os.path.isfile(sources):
- with open(sources, 'r') as f:
- sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())]
- else:
- sources = [sources]
-
- n = len(sources)
- self.imgs = [None] * n
- self.sources = [clean_str(x) for x in sources] # clean source names for later
- for i, s in enumerate(sources):
- # Start the thread to read frames from the video stream
- print(f'{i + 1}/{n}: {s}... ', end='')
-            url = int(s) if s.isnumeric() else s  # numeric string is a local camera index (avoids eval)
- if 'youtube.com/' in str(url) or 'youtu.be/' in str(url): # if source is YouTube video
- check_requirements(('pafy', 'youtube_dl'))
- import pafy
- url = pafy.new(url).getbest(preftype="mp4").url
- cap = cv2.VideoCapture(url)
- assert cap.isOpened(), f'Failed to open {s}'
- w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
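-            # modulo 100 guards against sources that report bogus, very large FPS values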
- self.fps = cap.get(cv2.CAP_PROP_FPS) % 100
-
- _, self.imgs[i] = cap.read() # guarantee first frame
- thread = Thread(target=self.update, args=([i, cap]), daemon=True)
- print(f' success ({w}x{h} at {self.fps:.2f} FPS).')
- thread.start()
- print('') # newline
-
- # check for common shapes
- s = np.stack([letterbox(x, self.img_size, stride=self.stride)[0].shape for x in self.imgs], 0) # shapes
- self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal
- if not self.rect:
- print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')
-
- def update(self, index, cap):
- # Read next stream frame in a daemon thread
- n = 0
- while cap.isOpened():
- n += 1
- # _, self.imgs[index] = cap.read()
- cap.grab()
- if n == 4: # read every 4th frame
- success, im = cap.retrieve()
- self.imgs[index] = im if success else self.imgs[index] * 0
- n = 0
- time.sleep(1 / self.fps) # wait time
-
- def __iter__(self):
- self.count = -1
- return self
-
- def __next__(self):
- self.count += 1
- img0 = self.imgs.copy()
- if cv2.waitKey(1) == ord('q'): # q to quit
- cv2.destroyAllWindows()
- raise StopIteration
-
- # Letterbox
- img = [letterbox(x, self.img_size, auto=self.rect, stride=self.stride)[0] for x in img0]
-
- # Stack
- img = np.stack(img, 0)
-
- # Convert
- img = img[:, :, :, ::-1].transpose(0, 3, 1, 2) # BGR to RGB, to bsx3x416x416
- img = np.ascontiguousarray(img)
-
- return self.sources, img, img0, None
-
- def __len__(self):
- return 0 # 1E12 frames = 32 streams at 30 FPS for 30 years
-
-
-def img2label_paths(img_paths):
- # Define label paths as a function of image paths
- sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings
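-    # e.g. 'dataset/images/train/0001.jpg' -> 'dataset/labels/train/0001.txt'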
- return ['txt'.join(x.replace(sa, sb, 1).rsplit(x.split('.')[-1], 1)) for x in img_paths]
-
-
-class LoadImagesAndLabels(Dataset): # for training/testing
- def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False,
- cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''):
- self.img_size = img_size
- self.augment = augment
- self.hyp = hyp
- self.image_weights = image_weights
- self.rect = False if image_weights else rect
- self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training)
- self.mosaic_border = [-img_size // 2, -img_size // 2]
- self.stride = stride
- self.path = path
- #self.albumentations = Albumentations() if augment else None
-
- try:
- f = [] # image files
- for p in path if isinstance(path, list) else [path]:
- p = Path(p) # os-agnostic
- if p.is_dir(): # dir
- f += glob.glob(str(p / '**' / '*.*'), recursive=True)
- # f = list(p.rglob('**/*.*')) # pathlib
- elif p.is_file(): # file
- with open(p, 'r') as t:
- t = t.read().strip().splitlines()
- parent = str(p.parent) + os.sep
- f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path
- # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib)
- else:
- raise Exception(f'{prefix}{p} does not exist')
- self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats])
- # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in img_formats]) # pathlib
- assert self.img_files, f'{prefix}No images found'
- except Exception as e:
- raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {help_url}')
-
- # Check cache
- self.label_files = img2label_paths(self.img_files) # labels
- cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') # cached labels
- if cache_path.is_file():
- cache, exists = torch.load(cache_path), True # load
- #if cache['hash'] != get_hash(self.label_files + self.img_files) or 'version' not in cache: # changed
- # cache, exists = self.cache_labels(cache_path, prefix), False # re-cache
- else:
- cache, exists = self.cache_labels(cache_path, prefix), False # cache
-
- # Display cache
- nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupted, total
- if exists:
- d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted"
- tqdm(None, desc=prefix + d, total=n, initial=n) # display cache results
-        assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Cannot train without labels. See {help_url}'
-
- # Read cache
- cache.pop('hash') # remove hash
- cache.pop('version') # remove version
- labels, shapes, self.segments = zip(*cache.values())
- self.labels = list(labels)
- self.shapes = np.array(shapes, dtype=np.float64)
- self.img_files = list(cache.keys()) # update
- self.label_files = img2label_paths(cache.keys()) # update
- if single_cls:
- for x in self.labels:
- x[:, 0] = 0
-
- n = len(shapes) # number of images
- bi = np.floor(np.arange(n) / batch_size).astype(int) # batch index
- nb = bi[-1] + 1 # number of batches
- self.batch = bi # batch index of image
- self.n = n
- self.indices = range(n)
-
- # Rectangular Training
- if self.rect:
- # Sort by aspect ratio
- s = self.shapes # wh
- ar = s[:, 1] / s[:, 0] # aspect ratio
- irect = ar.argsort()
- self.img_files = [self.img_files[i] for i in irect]
- self.label_files = [self.label_files[i] for i in irect]
- self.labels = [self.labels[i] for i in irect]
- self.shapes = s[irect] # wh
- ar = ar[irect]
-
- # Set training image shapes
- shapes = [[1, 1]] * nb
- for i in range(nb):
- ari = ar[bi == i]
- mini, maxi = ari.min(), ari.max()
- if maxi < 1:
- shapes[i] = [maxi, 1]
- elif mini > 1:
- shapes[i] = [1, 1 / mini]
-
- self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(int) * stride
-
- # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)
- self.imgs = [None] * n
- if cache_images:
- if cache_images == 'disk':
- self.im_cache_dir = Path(Path(self.img_files[0]).parent.as_posix() + '_npy')
- self.img_npy = [self.im_cache_dir / Path(f).with_suffix('.npy').name for f in self.img_files]
- self.im_cache_dir.mkdir(parents=True, exist_ok=True)
- gb = 0 # Gigabytes of cached images
- self.img_hw0, self.img_hw = [None] * n, [None] * n
- results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n)))
- pbar = tqdm(enumerate(results), total=n)
- for i, x in pbar:
- if cache_images == 'disk':
- if not self.img_npy[i].exists():
- np.save(self.img_npy[i].as_posix(), x[0])
- gb += self.img_npy[i].stat().st_size
- else:
- self.imgs[i], self.img_hw0[i], self.img_hw[i] = x
- gb += self.imgs[i].nbytes
- pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB)'
- pbar.close()
-
- def cache_labels(self, path=Path('./labels.cache'), prefix=''):
- # Cache dataset labels, check images and read shapes
- x = {} # dict
-        nm, nf, ne, nc = 0, 0, 0, 0  # number missing, found, empty, corrupted
- pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files))
- for i, (im_file, lb_file) in enumerate(pbar):
- try:
- # verify images
- im = Image.open(im_file)
- im.verify() # PIL verify
- shape = exif_size(im) # image size
- segments = [] # instance segments
- assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels'
- assert im.format.lower() in img_formats, f'invalid image format {im.format}'
-
- # verify labels
- if os.path.isfile(lb_file):
- nf += 1 # label found
- with open(lb_file, 'r') as f:
- l = [x.split() for x in f.read().strip().splitlines()]
- if any([len(x) > 8 for x in l]): # is segment
- classes = np.array([x[0] for x in l], dtype=np.float32)
- segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l] # (cls, xy1...)
- l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh)
- l = np.array(l, dtype=np.float32)
- if len(l):
- assert l.shape[1] == 5, 'labels require 5 columns each'
- assert (l >= 0).all(), 'negative labels'
- assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels'
- assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels'
- else:
- ne += 1 # label empty
- l = np.zeros((0, 5), dtype=np.float32)
- else:
- nm += 1 # label missing
- l = np.zeros((0, 5), dtype=np.float32)
- x[im_file] = [l, shape, segments]
- except Exception as e:
- nc += 1
- print(f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}')
-
- pbar.desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels... " \
- f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted"
- pbar.close()
-
- if nf == 0:
- print(f'{prefix}WARNING: No labels found in {path}. See {help_url}')
-
- x['hash'] = get_hash(self.label_files + self.img_files)
- x['results'] = nf, nm, ne, nc, i + 1
- x['version'] = 0.1 # cache version
- torch.save(x, path) # save for next time
- logging.info(f'{prefix}New cache created: {path}')
- return x
-
- def __len__(self):
- return len(self.img_files)
-
- # def __iter__(self):
- # self.count = -1
- # print('ran dataset iter')
- # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)
- # return self
-
- def __getitem__(self, index):
- index = self.indices[index] # linear, shuffled, or image_weights
-
- hyp = self.hyp
- mosaic = self.mosaic and random.random() < hyp['mosaic']
- if mosaic:
- # Load mosaic
- if random.random() < 0.8:
- img, labels = load_mosaic(self, index)
- else:
- img, labels = load_mosaic9(self, index)
- shapes = None
-
- # MixUp https://arxiv.org/pdf/1710.09412.pdf
- if random.random() < hyp['mixup']:
- if random.random() < 0.8:
- img2, labels2 = load_mosaic(self, random.randint(0, len(self.labels) - 1))
- else:
- img2, labels2 = load_mosaic9(self, random.randint(0, len(self.labels) - 1))
- r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0
- img = (img * r + img2 * (1 - r)).astype(np.uint8)
- labels = np.concatenate((labels, labels2), 0)
-
- else:
- # Load image
- img, (h0, w0), (h, w) = load_image(self, index)
-
- # Letterbox
- shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape
- img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment)
- shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling
-
- labels = self.labels[index].copy()
- if labels.size: # normalized xywh to pixel xyxy format
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1])
-
- if self.augment:
- # Augment imagespace
- if not mosaic:
- img, labels = random_perspective(img, labels,
- degrees=hyp['degrees'],
- translate=hyp['translate'],
- scale=hyp['scale'],
- shear=hyp['shear'],
- perspective=hyp['perspective'])
-
-
- #img, labels = self.albumentations(img, labels)
-
- # Augment colorspace
- augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])
-
- # Apply cutouts
- # if random.random() < 0.9:
- # labels = cutout(img, labels)
-
- if random.random() < hyp['paste_in']:
- sample_labels, sample_images, sample_masks = [], [], []
- while len(sample_labels) < 30:
- sample_labels_, sample_images_, sample_masks_ = load_samples(self, random.randint(0, len(self.labels) - 1))
- sample_labels += sample_labels_
- sample_images += sample_images_
- sample_masks += sample_masks_
- #print(len(sample_labels))
- if len(sample_labels) == 0:
- break
- labels = pastein(img, labels, sample_labels, sample_images, sample_masks)
-
- nL = len(labels) # number of labels
- if nL:
- labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh
- labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1
- labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1
-
- if self.augment:
- # flip up-down
- if random.random() < hyp['flipud']:
- img = np.flipud(img)
- if nL:
- labels[:, 2] = 1 - labels[:, 2]
-
- # flip left-right
- if random.random() < hyp['fliplr']:
- img = np.fliplr(img)
- if nL:
- labels[:, 1] = 1 - labels[:, 1]
-
- labels_out = torch.zeros((nL, 6))
- if nL:
- labels_out[:, 1:] = torch.from_numpy(labels)
-
- # Convert
- img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- img = np.ascontiguousarray(img)
-
- return torch.from_numpy(img), labels_out, self.img_files[index], shapes
-
- @staticmethod
- def collate_fn(batch):
- img, label, path, shapes = zip(*batch) # transposed
- for i, l in enumerate(label):
- l[:, 0] = i # add target image index for build_targets()
- return torch.stack(img, 0), torch.cat(label, 0), path, shapes
-
- @staticmethod
- def collate_fn4(batch):
- img, label, path, shapes = zip(*batch) # transposed
- n = len(shapes) // 4
- img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n]
-
- ho = torch.tensor([[0., 0, 0, 1, 0, 0]])
- wo = torch.tensor([[0., 0, 1, 0, 0, 0]])
- s = torch.tensor([[1, 1, .5, .5, .5, .5]]) # scale
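-        # each group of 4 images becomes one sample: either one image upsampled 2x (50% chance) or all four tiled into
-        # a 2x2 canvas; ho/wo shift the normalized y/x label offsets for the tiles and s rescales coordinates by 0.5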
- for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW
- i *= 4
- if random.random() < 0.5:
- im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., mode='bilinear', align_corners=False)[
- 0].type(img[i].type())
- l = label[i]
- else:
- im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2)
- l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s
- img4.append(im)
- label4.append(l)
-
- for i, l in enumerate(label4):
- l[:, 0] = i # add target image index for build_targets()
-
- return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4
-
-
-# Ancillary functions --------------------------------------------------------------------------------------------------
-def load_image(self, index):
- # loads 1 image from dataset, returns img, original hw, resized hw
- img = self.imgs[index]
- if img is None: # not cached
- path = self.img_files[index]
- img = cv2.imread(path) # BGR
- assert img is not None, 'Image Not Found ' + path
- h0, w0 = img.shape[:2] # orig hw
- r = self.img_size / max(h0, w0) # resize image to img_size
- if r != 1: # always resize down, only resize up if training with augmentation
- interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR
- img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp)
- return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized
- else:
- return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized
-
-
-def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5):
- r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
- hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
- dtype = img.dtype # uint8
-
- x = np.arange(0, 256, dtype=np.int16)
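-    # 256-entry look-up tables: hue wraps modulo 180 (OpenCV hue range), saturation/value clip to 0-255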
- lut_hue = ((x * r[0]) % 180).astype(dtype)
- lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
- lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
-
- img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype)
- cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed
-
-
-def hist_equalize(img, clahe=True, bgr=False):
- # Equalize histogram on BGR image 'img' with img.shape(n,m,3) and range 0-255
- yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)
- if clahe:
- c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
- yuv[:, :, 0] = c.apply(yuv[:, :, 0])
- else:
- yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram
- return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB
-
-
-def load_mosaic(self, index):
- # loads images in a 4-mosaic
-
- labels4, segments4 = [], []
- s = self.img_size
- yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y
- indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img4
- if i == 0: # top left
- img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
- elif i == 1: # top right
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
- elif i == 2: # bottom left
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
- elif i == 3: # bottom right
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
-
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- padw = x1a - x1b
- padh = y1a - y1b
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
- labels4.append(labels)
- segments4.extend(segments)
-
- # Concat/clip labels
- labels4 = np.concatenate(labels4, 0)
- for x in (labels4[:, 1:], *segments4):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img4, labels4 = replicate(img4, labels4) # replicate
-
- # Augment
- #img4, labels4, segments4 = remove_background(img4, labels4, segments4)
- #sample_segments(img4, labels4, segments4, probability=self.hyp['copy_paste'])
- img4, labels4, segments4 = copy_paste(img4, labels4, segments4, probability=self.hyp['copy_paste'])
- img4, labels4 = random_perspective(img4, labels4, segments4,
- degrees=self.hyp['degrees'],
- translate=self.hyp['translate'],
- scale=self.hyp['scale'],
- shear=self.hyp['shear'],
- perspective=self.hyp['perspective'],
- border=self.mosaic_border) # border to remove
-
- return img4, labels4
-
-
-def load_mosaic9(self, index):
- # loads images in a 9-mosaic
-
- labels9, segments9 = [], []
- s = self.img_size
- indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img9
- if i == 0: # center
- img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 9 tiles
- h0, w0 = h, w
- c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates
- elif i == 1: # top
- c = s, s - h, s + w, s
- elif i == 2: # top right
- c = s + wp, s - h, s + wp + w, s
- elif i == 3: # right
- c = s + w0, s, s + w0 + w, s + h
- elif i == 4: # bottom right
- c = s + w0, s + hp, s + w0 + w, s + hp + h
- elif i == 5: # bottom
- c = s + w0 - w, s + h0, s + w0, s + h0 + h
- elif i == 6: # bottom left
- c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h
- elif i == 7: # left
- c = s - w, s + h0 - h, s, s + h0
- elif i == 8: # top left
- c = s - w, s + h0 - hp - h, s, s + h0 - hp
-
- padx, pady = c[:2]
- x1, y1, x2, y2 = [max(x, 0) for x in c] # allocate coords
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padx, pady) for x in segments]
- labels9.append(labels)
- segments9.extend(segments)
-
- # Image
- img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax]
- hp, wp = h, w # height, width previous
-
- # Offset
- yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border] # mosaic center x, y
- img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s]
-
- # Concat/clip labels
- labels9 = np.concatenate(labels9, 0)
- labels9[:, [1, 3]] -= xc
- labels9[:, [2, 4]] -= yc
- c = np.array([xc, yc]) # centers
- segments9 = [x - c for x in segments9]
-
- for x in (labels9[:, 1:], *segments9):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img9, labels9 = replicate(img9, labels9) # replicate
-
- # Augment
- #img9, labels9, segments9 = remove_background(img9, labels9, segments9)
- img9, labels9, segments9 = copy_paste(img9, labels9, segments9, probability=self.hyp['copy_paste'])
- img9, labels9 = random_perspective(img9, labels9, segments9,
- degrees=self.hyp['degrees'],
- translate=self.hyp['translate'],
- scale=self.hyp['scale'],
- shear=self.hyp['shear'],
- perspective=self.hyp['perspective'],
- border=self.mosaic_border) # border to remove
-
- return img9, labels9
-
-
-def load_samples(self, index):
- # builds a 4-mosaic and returns sampled object segments (labels, image crops, masks)
-
- labels4, segments4 = [], []
- s = self.img_size
- yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y
- indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices
- for i, index in enumerate(indices):
- # Load image
- img, _, (h, w) = load_image(self, index)
-
- # place img in img4
- if i == 0: # top left
- img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
- x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
- elif i == 1: # top right
- x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
- x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
- elif i == 2: # bottom left
- x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
- elif i == 3: # bottom right
- x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
- x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
-
- img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- padw = x1a - x1b
- padh = y1a - y1b
-
- # Labels
- labels, segments = self.labels[index].copy(), self.segments[index].copy()
- if labels.size:
- labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format
- segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
- labels4.append(labels)
- segments4.extend(segments)
-
- # Concat/clip labels
- labels4 = np.concatenate(labels4, 0)
- for x in (labels4[:, 1:], *segments4):
- np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective()
- # img4, labels4 = replicate(img4, labels4) # replicate
-
- # Augment
- #img4, labels4, segments4 = remove_background(img4, labels4, segments4)
- sample_labels, sample_images, sample_masks = sample_segments(img4, labels4, segments4, probability=0.5)
-
- return sample_labels, sample_images, sample_masks
-
-
-def copy_paste(img, labels, segments, probability=0.5):
- # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- if probability and n:
- h, w, c = img.shape # height, width, channels
- im_new = np.zeros(img.shape, np.uint8)
- for j in random.sample(range(n), k=round(probability * n)):
- l, s = labels[j], segments[j]
- box = w - l[3], l[2], w - l[1], l[4]
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- if (ioa < 0.30).all(): # allow 30% obscuration of existing labels
- labels = np.concatenate((labels, [[l[0], *box]]), 0)
- segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
- cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
-
- result = cv2.bitwise_and(src1=img, src2=im_new)
- result = cv2.flip(result, 1) # augment segments (flip left-right)
- i = result > 0 # pixels to replace
- # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch
- img[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug
-
- return img, labels, segments
-
-
-def remove_background(img, labels, segments):
- # Keep only labeled segments, filling everything outside them with gray (114); labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- h, w, c = img.shape # height, width, channels
- im_new = np.zeros(img.shape, np.uint8)
- img_new = np.ones(img.shape, np.uint8) * 114
- for j in range(n):
- cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
-
- result = cv2.bitwise_and(src1=img, src2=im_new)
-
- i = result > 0 # pixels to replace
- img_new[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug
-
- return img_new, labels, segments
-
-
-def sample_segments(img, labels, segments, probability=0.5):
- # Sample object segments for Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- sample_labels = []
- sample_images = []
- sample_masks = []
- if probability and n:
- h, w, c = img.shape # height, width, channels
- for j in random.sample(range(n), k=round(probability * n)):
- l, s = labels[j], segments[j]
- box = l[1].astype(int).clip(0,w-1), l[2].astype(int).clip(0,h-1), l[3].astype(int).clip(0,w-1), l[4].astype(int).clip(0,h-1)
-
- #print(box)
- if (box[2] <= box[0]) or (box[3] <= box[1]):
- continue
-
- sample_labels.append(l[0])
-
- mask = np.zeros(img.shape, np.uint8)
-
- cv2.drawContours(mask, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
- sample_masks.append(mask[box[1]:box[3],box[0]:box[2],:])
-
- result = cv2.bitwise_and(src1=img, src2=mask)
- i = result > 0 # pixels to replace
- mask[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug
- #print(box)
- sample_images.append(mask[box[1]:box[3],box[0]:box[2],:])
-
- return sample_labels, sample_images, sample_masks
-
-
-def replicate(img, labels):
- # Replicate labels
- h, w = img.shape[:2]
- boxes = labels[:, 1:].astype(int)
- x1, y1, x2, y2 = boxes.T
- s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
- for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
- x1b, y1b, x2b, y2b = boxes[i]
- bh, bw = y2b - y1b, x2b - x1b
- yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y
- x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
- img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax]
- labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
-
- return img, labels
-
-
-def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
- # Resize and pad image while meeting stride-multiple constraints
- shape = img.shape[:2] # current shape [height, width]
- if isinstance(new_shape, int):
- new_shape = (new_shape, new_shape)
-
- # Scale ratio (new / old)
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
- if not scaleup: # only scale down, do not scale up (for better test mAP)
- r = min(r, 1.0)
-
- # Compute padding
- ratio = r, r # width, height ratios
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
- dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
- if auto: # minimum rectangle
- dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
- elif scaleFill: # stretch
- dw, dh = 0.0, 0.0
- new_unpad = (new_shape[1], new_shape[0])
- ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
-
- dw /= 2 # divide padding into 2 sides
- dh /= 2
-
- if shape[::-1] != new_unpad: # resize
- img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
- img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
- return img, ratio, (dw, dh)
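-
-# Worked example (shapes assumed): a 720x1280 image with new_shape=640 scales by
-# r=0.5 to 360x640; auto=True then pads height only to the next stride multiple
-# (280 % 32 = 24, split 12/12), giving a 384x640 output.
-# img, ratio, (dw, dh) = letterbox(img0, new_shape=640) # img0 is a hypothetical BGR array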
-
-
-def random_perspective(img, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0,
- border=(0, 0)):
- # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10))
- # targets = [cls, xyxy]
-
- height = img.shape[0] + border[0] * 2 # shape(h,w,c)
- width = img.shape[1] + border[1] * 2
-
- # Center
- C = np.eye(3)
- C[0, 2] = -img.shape[1] / 2 # x translation (pixels)
- C[1, 2] = -img.shape[0] / 2 # y translation (pixels)
-
- # Perspective
- P = np.eye(3)
- P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
- P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
-
- # Rotation and Scale
- R = np.eye(3)
- a = random.uniform(-degrees, degrees)
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
- s = random.uniform(1 - scale, 1.1 + scale)
- # s = 2 ** random.uniform(-scale, scale)
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
-
- # Shear
- S = np.eye(3)
- S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
- S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
-
- # Translation
- T = np.eye(3)
- T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
- T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
-
- # Combined rotation matrix
- M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
- if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
- if perspective:
- img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114))
- else: # affine
- img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
-
- # Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(img[:, :, ::-1]) # base
- # ax[1].imshow(img2[:, :, ::-1]) # warped
-
- # Transform label coordinates
- n = len(targets)
- if n:
- use_segments = any(x.any() for x in segments)
- new = np.zeros((n, 4))
- if use_segments: # warp segments
- segments = resample_segments(segments) # upsample
- for i, segment in enumerate(segments):
- xy = np.ones((len(segment), 3))
- xy[:, :2] = segment
- xy = xy @ M.T # transform
- xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine
-
- # clip
- new[i] = segment2box(xy, width, height)
-
- else: # warp boxes
- xy = np.ones((n * 4, 3))
- xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
- xy = xy @ M.T # transform
- xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine
-
- # create new boxes
- x = xy[:, [0, 2, 4, 6]]
- y = xy[:, [1, 3, 5, 7]]
- new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
-
- # clip
- new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
- new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
-
- # filter candidates
- i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)
- targets = targets[i]
- targets[:, 1:5] = new[i]
-
- return img, targets
-
-
-def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n)
- # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
- w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
- w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
- ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
- return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates
-
-
-def bbox_ioa(box1, box2):
- # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2
- box2 = box2.transpose()
-
- # Get the coordinates of bounding boxes
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
-
- # Intersection area
- inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \
- (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0)
-
- # box2 area
- box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16
-
- # Intersection over box2 area
- return inter_area / box2_area
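-
-# Worked example (values assumed): box1=[0,0,10,10] vs box2=[[5,5,15,15]] gives
-# inter_area=25 and box2_area=100, so bbox_ioa returns [0.25].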
-
-
-def cutout(image, labels):
- # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
- h, w = image.shape[:2]
-
- # create random masks
- scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
- for s in scales:
- mask_h = random.randint(1, int(h * s))
- mask_w = random.randint(1, int(w * s))
-
- # box
- xmin = max(0, random.randint(0, w) - mask_w // 2)
- ymin = max(0, random.randint(0, h) - mask_h // 2)
- xmax = min(w, xmin + mask_w)
- ymax = min(h, ymin + mask_h)
-
- # apply random color mask
- image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
-
- # return unobscured labels
- if len(labels) and s > 0.03:
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- labels = labels[ioa < 0.60] # remove >60% obscured labels
-
- return labels
-
-
-def pastein(image, labels, sample_labels, sample_images, sample_masks):
- # Pastes pre-sampled object segments into the image (Copy-Paste style augmentation https://arxiv.org/abs/2012.07177)
- h, w = image.shape[:2]
-
- # create random masks
- scales = [0.75] * 2 + [0.5] * 4 + [0.25] * 4 + [0.125] * 4 + [0.0625] * 6 # image size fraction
- for s in scales:
- if random.random() < 0.2:
- continue
- mask_h = random.randint(1, int(h * s))
- mask_w = random.randint(1, int(w * s))
-
- # box
- xmin = max(0, random.randint(0, w) - mask_w // 2)
- ymin = max(0, random.randint(0, h) - mask_h // 2)
- xmax = min(w, xmin + mask_w)
- ymax = min(h, ymin + mask_h)
-
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
- if len(labels):
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- else:
- ioa = np.zeros(1)
-
- if (ioa < 0.30).all() and len(sample_labels) and (xmax > xmin+20) and (ymax > ymin+20): # allow 30% obscuration of existing labels
- sel_ind = random.randint(0, len(sample_labels)-1)
- #print(len(sample_labels))
- #print(sel_ind)
- #print((xmax-xmin, ymax-ymin))
- #print(image[ymin:ymax, xmin:xmax].shape)
- #print([[sample_labels[sel_ind], *box]])
- #print(labels.shape)
- hs, ws, cs = sample_images[sel_ind].shape
- r_scale = min((ymax-ymin)/hs, (xmax-xmin)/ws)
- r_w = int(ws*r_scale)
- r_h = int(hs*r_scale)
-
- if (r_w > 10) and (r_h > 10):
- r_mask = cv2.resize(sample_masks[sel_ind], (r_w, r_h))
- r_image = cv2.resize(sample_images[sel_ind], (r_w, r_h))
- temp_crop = image[ymin:ymin+r_h, xmin:xmin+r_w]
- m_ind = r_mask > 0
- if m_ind.astype(np.int32).sum() > 60:
- temp_crop[m_ind] = r_image[m_ind]
- #print(sample_labels[sel_ind])
- #print(sample_images[sel_ind].shape)
- #print(temp_crop.shape)
- box = np.array([xmin, ymin, xmin+r_w, ymin+r_h], dtype=np.float32)
- if len(labels):
- labels = np.concatenate((labels, [[sample_labels[sel_ind], *box]]), 0)
- else:
- labels = np.array([[sample_labels[sel_ind], *box]])
-
- image[ymin:ymin+r_h, xmin:xmin+r_w] = temp_crop
-
- return labels
-
-class Albumentations:
- # YOLOv5 Albumentations class (optional, only used if package is installed)
- def __init__(self):
- self.transform = None
- try:
- import albumentations as A
-
- self.transform = A.Compose([
- A.CLAHE(p=0.01),
- A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.01),
- A.RandomGamma(gamma_limit=[80, 120], p=0.01),
- A.Blur(p=0.01),
- A.MedianBlur(p=0.01),
- A.ToGray(p=0.01),
- A.ImageCompression(quality_lower=75, p=0.01),],
- bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels']))
-
- #logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p))
- except ImportError: # package not installed; transform stays None as the class comment promises
- pass
-
- def __call__(self, im, labels, p=1.0):
- if self.transform and random.random() < p:
- new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed
- im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
- return im, labels
-
-
-def create_folder(path='./new'):
- # Create folder
- if os.path.exists(path):
- shutil.rmtree(path) # delete output folder
- os.makedirs(path) # make new output folder
-
-
-def flatten_recursive(path='../coco'):
- # Flatten a recursive directory by bringing all files to top level
- new_path = Path(path + '_flat')
- create_folder(new_path)
- for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)):
- shutil.copyfile(file, new_path / Path(file).name)
-
-
-def extract_boxes(path='../coco/'): # from utils.datasets import *; extract_boxes('../coco128')
- # Convert detection dataset into classification dataset, with one directory per class
-
- path = Path(path) # images dir
- shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing
- files = list(path.rglob('*.*'))
- n = len(files) # number of files
- for im_file in tqdm(files, total=n):
- if im_file.suffix[1:] in img_formats:
- # image
- im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB
- h, w = im.shape[:2]
-
- # labels
- lb_file = Path(img2label_paths([str(im_file)])[0])
- if Path(lb_file).exists():
- with open(lb_file, 'r') as f:
- lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels
-
- for j, x in enumerate(lb):
- c = int(x[0]) # class
- f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename
- if not f.parent.is_dir():
- f.parent.mkdir(parents=True)
-
- b = x[1:] * [w, h, w, h] # box
- # b[2:] = b[2:].max() # rectangle to square
- b[2:] = b[2:] * 1.2 + 3 # pad
- b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(int) # np.int was removed in NumPy >= 1.24
-
- b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image
- b[[1, 3]] = np.clip(b[[1, 3]], 0, h)
- assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}'
-
-
-def autosplit(path='../coco', weights=(0.9, 0.1, 0.0), annotated_only=False):
- """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files
- Usage: from utils.datasets import *; autosplit('../coco')
- Arguments
- path: Path to images directory
- weights: Train, val, test weights (list)
- annotated_only: Only use images with an annotated txt file
- """
- path = Path(path) # images dir
- files = sum([list(path.rglob(f"*.{img_ext}")) for img_ext in img_formats], []) # image files only
- n = len(files) # number of files
- indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split
-
- txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files
- [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing
-
- print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only)
- for i, img in tqdm(zip(indices, files), total=n):
- if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label
- with open(path / txt[i], 'a') as f:
- f.write(str(img) + '\n') # add image to txt file
-
-
-def load_segmentations(self, index):
- key = '/work/handsomejw66/coco17/' + self.img_files[index]
- #print(key)
- # /work/handsomejw66/coco17/
- return self.segs[key]
diff --git a/cv/detection/yolov7/pytorch/utils/general.py b/cv/detection/yolov7/pytorch/utils/general.py
deleted file mode 100644
index decdcc64ecd72927bc6c185683977854e593711d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/general.py
+++ /dev/null
@@ -1,892 +0,0 @@
-# YOLOR general utils
-
-import glob
-import logging
-import math
-import os
-import platform
-import random
-import re
-import subprocess
-import time
-from pathlib import Path
-
-import cv2
-import numpy as np
-import pandas as pd
-import torch
-import torchvision
-import yaml
-
-from utils.google_utils import gsutil_getsize
-from utils.metrics import fitness
-from utils.torch_utils import init_torch_seeds
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-pd.options.display.max_columns = 10
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads
-
-
-def set_logging(rank=-1):
- logging.basicConfig(
- format="%(message)s",
- level=logging.INFO if rank in [-1, 0] else logging.WARN)
-
-
-def init_seeds(seed=0):
- # Initialize random number generator (RNG) seeds
- random.seed(seed)
- np.random.seed(seed)
- init_torch_seeds(seed)
-
-
-def get_latest_run(search_dir='.'):
- # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def isdocker():
- # Is environment a Docker container
- return Path('/workspace').exists() # or Path('/.dockerenv').exists()
-
-
-def emojis(s=''):
- # Return platform-dependent emoji-safe version of string (avoids shadowing built-in str)
- return s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s
-
-
-def check_online():
- # Check internet connectivity
- import socket
- try:
- socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility
- return True
- except OSError:
- return False
-
-
-def check_git_status():
- # Recommend 'git pull' if code is out of date
- print(colorstr('github: '), end='')
- try:
- assert Path('.git').exists(), 'skipping check (not a git repository)'
- assert not isdocker(), 'skipping check (Docker image)'
- assert check_online(), 'skipping check (offline)'
-
- cmd = 'git fetch && git config --get remote.origin.url'
- url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url
- branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
- n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
- if n > 0:
- s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \
- f"Use 'git pull' to update or 'git clone {url}' to download latest."
- else:
- s = f'up to date with {url} ✅'
- print(emojis(s)) # emoji-safe
- except Exception as e:
- print(e)
-
-
-def check_requirements(requirements='requirements.txt', exclude=()):
- # Check installed dependencies meet requirements (pass *.txt file or list of packages)
- import pkg_resources as pkg
- prefix = colorstr('red', 'bold', 'requirements:')
- if isinstance(requirements, (str, Path)): # requirements.txt file
- file = Path(requirements)
- if not file.exists():
- print(f"{prefix} {file.resolve()} not found, check failed.")
- return
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude]
- else: # list or tuple of packages
- requirements = [x for x in requirements if x not in exclude]
-
- n = 0 # number of package updates
- for r in requirements:
- try:
- pkg.require(r)
- except Exception as e: # DistributionNotFound or VersionConflict if requirements not met
- n += 1
- print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...")
- print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode())
-
- if n: # if packages updated
- source = file.resolve() if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- print(emojis(s)) # emoji-safe
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil to nearest stride multiple
- if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
- return new_size
-
-
-def check_imshow():
- # Check if environment supports image displays
- try:
- assert not isdocker(), 'cv2.imshow() is disabled in Docker environments'
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}')
- return False
-
-
-def check_file(file):
- # Search for file if not found
- if Path(file).is_file() or file == '':
- return file
- else:
- files = glob.glob('./**/' + file, recursive=True) # find file
- assert len(files), f'File Not Found: {file}' # assert file was found
- assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
- return files[0] # return file
-
-
-def check_dataset(data):
- # Download dataset if not found locally (avoids shadowing built-in dict)
- val, s = data.get('val'), data.get('download')
- if val and len(val):
- val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
- if not all(x.exists() for x in val):
- print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()])
- if s and len(s): # download script
- print('Downloading %s ...' % s)
- if s.startswith('http') and s.endswith('.zip'): # URL
- f = Path(s).name # filename
- torch.hub.download_url_to_file(s, f)
- r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip
- else: # bash script
- r = os.system(s)
- print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value
- else:
- raise Exception('Dataset not found.')
-
-
-def make_divisible(x, divisor):
- # Returns x rounded up to the nearest multiple of divisor
- return math.ceil(x / divisor) * divisor
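-
-# e.g. make_divisible(100, 32) -> 128, since ceil(100/32) = 4 and 4 * 32 = 128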
-
-
-def clean_str(s):
- # Cleans a string by replacing special characters with underscore _
- return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
-def one_cycle(y1=0.0, y2=1.0, steps=100):
- # lambda function for sinusoidal ramp from y1 to y2
- return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
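-
-# e.g. one_cycle(0.1, 0.01, 100) returns a lambda with f(0)=0.1 and f(100)=0.01,
-# following half a cosine period between the endpoints (used for LR scheduling).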
-
-
-def colorstr(*input):
- # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
- *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
- colors = {'black': '\033[30m', # basic colors
- 'red': '\033[31m',
- 'green': '\033[32m',
- 'yellow': '\033[33m',
- 'blue': '\033[34m',
- 'magenta': '\033[35m',
- 'cyan': '\033[36m',
- 'white': '\033[37m',
- 'bright_black': '\033[90m', # bright colors
- 'bright_red': '\033[91m',
- 'bright_green': '\033[92m',
- 'bright_yellow': '\033[93m',
- 'bright_blue': '\033[94m',
- 'bright_magenta': '\033[95m',
- 'bright_cyan': '\033[96m',
- 'bright_white': '\033[97m',
- 'end': '\033[0m', # misc
- 'bold': '\033[1m',
- 'underline': '\033[4m'}
- return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
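-
-# e.g. colorstr('blue', 'bold', 'hello') -> '\033[34m\033[1mhello\033[0m';
-# with a single argument the ('blue', 'bold') defaults are applied.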
-
-
-def labels_to_class_weights(labels, nc=80):
- # Get class weights (inverse frequency) from training labels
- if labels[0] is None: # no labels loaded
- return torch.Tensor()
-
- labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
- classes = labels[:, 0].astype(np.int32) # labels = [class xywh]
- weights = np.bincount(classes, minlength=nc) # occurrences per class
-
- # Prepend gridpoint count (for uCE training)
- # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
- # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
-
- weights[weights == 0] = 1 # replace empty bins with 1
- weights = 1 / weights # number of targets per class
- weights /= weights.sum() # normalize
- return torch.from_numpy(weights)
-
-
-def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
- # Produces image weights based on class_weights and image contents
- class_counts = np.array([np.bincount(x[:, 0].astype(np.int32), minlength=nc) for x in labels])
- image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1)
- # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample
- return image_weights
-
-
-def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
- # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
- # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
- # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
- x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
- return x
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
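-
-# Worked example: xywh2xyxy(np.array([[50., 50, 20, 10]])) -> [[40., 45, 60, 55]]
-# (center 50,50 with width 20 and height 10 becomes corners 40,45 and 60,55).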
-
-
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
- x, y = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- s = np.concatenate((s, s[0:1, :]), axis=0)
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
- segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
- return segments
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
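-
-# Usage sketch (shapes assumed): after letterboxing a 720x1280 image to 384x640,
-# scale_coords((384, 640), det[:, :4], (720, 1280)) undoes the 12px top/bottom
-# padding and the 0.5 gain to map detections back onto the original image.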
-
-
-def clip_coords(boxes, img_shape):
- # Clip xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
-
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
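-
-# Worked example (tensors assumed): box1=[0,0,10,10] vs box2=[[5,5,15,15]] gives
-# inter=25, union=175, IoU~0.143; with GIoU=True the 15x15 convex hull adds a
-# penalty of (225-175)/225, so GIoU ~ -0.079.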
-
-
-
-
-def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9):
- # Returns the IoU of box1 to box2 raised to the power alpha (alpha-IoU). box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- # change iou into pow(iou+eps)
- # iou = inter / union
- iou = torch.pow(inter/union + eps, alpha)
- # beta = 2 * alpha
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal
- rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2)
- rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2)
- rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha_ciou = v / ((1 + eps) - inter / union + v)
- # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU
- return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- # c_area = cw * ch + eps # convex area
- # return iou - (c_area - union) / c_area # GIoU
- c_area = torch.max(cw * ch + eps, union) # convex area
- return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU
- else:
- return iou # torch.log(iou+eps) or iou
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in box1 and box2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
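-
-# Worked example: box_iou(torch.tensor([[0., 0, 10, 10]]), torch.tensor([[5., 5, 15, 15]]))
-# -> tensor([[0.1429]]), since inter=25 and union=100+100-25=175.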
-
-
-def wh_iou(wh1, wh2):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def box_giou(box1, box2):
- """
- Return generalized intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- box1 (Tensor[N, 4]): first set of boxes
- box2 (Tensor[M, 4]): second set of boxes
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values
- for every element in box1 and box2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- areai = whi[:, :, 0] * whi[:, :, 1]
-
- return iou - (areai - union) / areai
-
-
-def box_ciou(box1, box2, eps: float = 1e-7):
- """
- Return complete intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- box1 (Tensor[N, 4]): first set of boxes
- box2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values
- for every element in box1 and box2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- w_pred = box1[:, None, 2] - box1[:, None, 0]
- h_pred = box1[:, None, 3] - box1[:, None, 1]
-
- w_gt = box2[:, 2] - box2[:, 0]
- h_gt = box2[:, 3] - box2[:, 1]
-
- v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2)
- with torch.no_grad():
- alpha = v / (1 - iou + v + eps)
- return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v
-
-
-def box_diou(box1, box2, eps: float = 1e-7):
- """
- Return distance intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- box1 (Tensor[N, 4]): first set of boxes
- box2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values
- for every element in box1 and box2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- # The distance IoU is the IoU penalized by a normalized
- # distance between boxes' centers squared.
- return iou - (centers_distance_squared / diagonal_distance_squared)
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=()):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, one (n,6) tensor per image [xyxy, conf, cls]
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- if nc == 1:
- x[:, 5:] = x[:, 4:5] # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
- # so there is no need to multiply.
- else:
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
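-
-# Usage sketch (model and img are assumed, not defined in this file):
-# pred = model(img)[0] # raw (bs, n, 5+nc) predictions
-# det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0] # (n, 6) boxes for image 0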
-
-
-def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=(), kpt_label=False, nc=None, nkpt=None):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, one (n,6) tensor per image [xyxy, conf, cls]
- """
- if nc is None:
- nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- if not kpt_label:
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
- else:
- kpts = x[:, 6:]
- conf, j = x[:, 5:6].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
- x = torch.load(f, map_location=torch.device('cpu'))
- if x.get('ema'):
- x['model'] = x['ema'] # replace model with ema
- for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys
- x[k] = None
- x['epoch'] = -1
- x['model'].half() # to FP16
- for p in x['model'].parameters():
- p.requires_grad = False
- torch.save(x, s or f)
- mb = os.path.getsize(s or f) / 1E6 # filesize
- print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB")
-
-
-def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''):
- # Print mutation results to evolve.txt (for use with train.py --evolve)
- a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys
- b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c))
-
- if bucket:
- url = 'gs://%s/evolve.txt' % bucket
- if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0):
- os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local
-
- with open('evolve.txt', 'a') as f: # append result
- f.write(c + b + '\n')
- x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows
- x = x[np.argsort(-fitness(x))] # sort
- np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness
-
- # Save yaml
- for i, k in enumerate(hyp.keys()):
- hyp[k] = float(x[0, i + 7])
- with open(yaml_file, 'w') as f:
- results = tuple(x[0, :7])
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n')
- yaml.dump(hyp, f, sort_keys=False)
-
- if bucket:
- os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload
-
-
-def apply_classifier(x, model, img, im0):
- # applies a second stage classifier to yolo outputs
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
- for i, d in enumerate(x): # per image
- if d is not None and len(d):
- d = d.clone()
-
- # Reshape and pad cutouts
- b = xyxy2xywh(d[:, :4]) # boxes
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
- d[:, :4] = xywh2xyxy(b).long()
-
- # Rescale boxes from img_size to im0 size
- scale_coords(img.shape[2:], d[:, :4], im0[i].shape)
-
- # Classes
- pred_cls1 = d[:, 5].long()
- ims = []
- for j, a in enumerate(d): # per item
- cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
- im = cv2.resize(cutout, (224, 224)) # BGR
- # cv2.imwrite('test%i.jpg' % j, cutout)
-
- im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, HWC to CHW (3x224x224)
- im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
- im /= 255.0 # 0 - 255 to 0.0 - 1.0
- ims.append(im)
-
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
- x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
-
- return x
-
-
-def increment_path(path, exist_ok=True, sep=''):
- # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc.
- path = Path(path) # os-agnostic
- if (path.exists() and exist_ok) or (not path.exists()):
- return str(path)
- else:
- dirs = glob.glob(f"{path}{sep}*") # similar paths
- matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
- i = [int(m.groups()[0]) for m in matches if m] # indices
- n = max(i) + 1 if i else 2 # increment number
- return f"{path}{sep}{n}" # update path
diff --git a/cv/detection/yolov7/pytorch/utils/google_app_engine/Dockerfile b/cv/detection/yolov7/pytorch/utils/google_app_engine/Dockerfile
deleted file mode 100644
index 0155618f475104e9858b81470339558156c94e13..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/google_app_engine/Dockerfile
+++ /dev/null
@@ -1,25 +0,0 @@
-FROM gcr.io/google-appengine/python
-
-# Create a virtualenv for dependencies. This isolates these packages from
-# system-level packages.
-# Use -p python3 or -p python3.7 to select the Python version. The default is version 2.
-RUN virtualenv /env -p python3
-
-# Setting these environment variables is the same as running
-# source /env/bin/activate.
-ENV VIRTUAL_ENV /env
-ENV PATH /env/bin:$PATH
-
-RUN apt-get update && apt-get install -y python-opencv
-
-# Copy the application's requirements.txt and run pip to install all
-# dependencies into the virtualenv.
-ADD requirements.txt /app/requirements.txt
-RUN pip install -r /app/requirements.txt
-
-# Add the application source code.
-ADD . /app
-
-# Run a WSGI server to serve the application. gunicorn must be declared as
-# a dependency in requirements.txt.
-CMD gunicorn -b :$PORT main:app
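-# App Engine injects $PORT at runtime; the service is deployed with `gcloud app deploy`
-# using the accompanying app.yaml.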
diff --git a/cv/detection/yolov7/pytorch/utils/google_app_engine/additional_requirements.txt b/cv/detection/yolov7/pytorch/utils/google_app_engine/additional_requirements.txt
deleted file mode 100644
index 5fcc30524a59ca2d3356b07725df7e2b64f81422..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/google_app_engine/additional_requirements.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-# Add these requirements to your app on top of the existing ones
-pip==18.1
-Flask==1.0.2
-gunicorn==19.9.0
diff --git a/cv/detection/yolov7/pytorch/utils/google_app_engine/app.yaml b/cv/detection/yolov7/pytorch/utils/google_app_engine/app.yaml
deleted file mode 100644
index 69b8f68b36a23eaa668699eb80b85ecdb17f9626..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/google_app_engine/app.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-runtime: custom
-env: flex
-
-service: yolorapp
-
-liveness_check:
- initial_delay_sec: 600
-
-manual_scaling:
- instances: 1
-resources:
- cpu: 1
- memory_gb: 4
- disk_size_gb: 20
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/utils/google_utils.py b/cv/detection/yolov7/pytorch/utils/google_utils.py
deleted file mode 100644
index f363408e63981702e63dcda189cbc2099d0a9499..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/google_utils.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Google utils: https://cloud.google.com/storage/docs/reference/libraries
-
-import os
-import platform
-import subprocess
-import time
-from pathlib import Path
-
-import requests
-import torch
-
-
-def gsutil_getsize(url=''):
- # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
- s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8')
- return eval(s.split(' ')[0]) if len(s) else 0 # bytes
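-# Note: the first token of `gsutil du` output is the size in bytes; int() would be a
-# safer parse than eval() here.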
-
-
-def attempt_download(file, repo='WongKinYiu/yolov7'):
- # Attempt file download if it does not exist
- file = Path(str(file).strip().replace("'", '').lower())
-
- if not file.exists():
- try:
- response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api
- assets = [x['name'] for x in response['assets']] # release assets
- tag = response['tag_name'] # i.e. 'v1.0'
- except Exception: # fallback plan
- assets = ['yolov7.pt', 'yolov7-tiny.pt', 'yolov7x.pt', 'yolov7-d6.pt', 'yolov7-e6.pt',
- 'yolov7-e6e.pt', 'yolov7-w6.pt']
- tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
-
- name = file.name
- if name in assets:
- msg = f'{file} missing, try downloading from https://github.com/{repo}/releases/'
- redundant = False # second download option
- try: # GitHub
- url = f'https://github.com/{repo}/releases/download/{tag}/{name}'
- print(f'Downloading {url} to {file}...')
- torch.hub.download_url_to_file(url, file)
- assert file.exists() and file.stat().st_size > 1E6 # check
- except Exception as e: # GCP
- print(f'Download error: {e}')
- assert redundant, 'No secondary mirror'
- url = f'https://storage.googleapis.com/{repo}/ckpt/{name}'
- print(f'Downloading {url} to {file}...')
- os.system(f'curl -L {url} -o {file}') # torch.hub.download_url_to_file(url, weights)
- finally:
- if not file.exists() or file.stat().st_size < 1E6: # check
- file.unlink(missing_ok=True) # remove partial downloads
- print(f'ERROR: Download failure: {msg}')
- print('')
- return
-
-
-def gdrive_download(id='', file='tmp.zip'):
- # Downloads a file from Google Drive. from yolov7.utils.google_utils import *; gdrive_download()
- t = time.time()
- file = Path(file)
- cookie = Path('cookie') # gdrive cookie
- print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='')
- file.unlink(missing_ok=True) # remove existing file
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Attempt file download
- out = "NUL" if platform.system() == "Windows" else "/dev/null"
- os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}')
- if os.path.exists('cookie'): # large file
- s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}'
- else: # small file
- s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"'
- r = os.system(s) # execute, capture return
- cookie.unlink(missing_ok=True) # remove existing cookie
-
- # Error check
- if r != 0:
- file.unlink(missing_ok=True) # remove partial
- print('Download error ') # raise Exception('Download error')
- return r
-
- # Unzip if archive
- if file.suffix == '.zip':
- print('unzipping... ', end='')
- os.system(f'unzip -q {file}') # unzip
- file.unlink() # remove zip to free space
-
- print(f'Done ({time.time() - t:.1f}s)')
- return r
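-# Large Drive files require the confirmation token extracted by get_token() below;
-# small files download directly without a cookie.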
-
-
-def get_token(cookie="./cookie"):
- with open(cookie) as f:
- for line in f:
- if "download" in line:
- return line.split()[-1]
- return ""
-
-# def upload_blob(bucket_name, source_file_name, destination_blob_name):
-# # Uploads a file to a bucket
-# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
-#
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(destination_blob_name)
-#
-# blob.upload_from_filename(source_file_name)
-#
-# print('File {} uploaded to {}.'.format(
-# source_file_name,
-# destination_blob_name))
-#
-#
-# def download_blob(bucket_name, source_blob_name, destination_file_name):
-# # Downloads a blob from a bucket
-# storage_client = storage.Client()
-# bucket = storage_client.get_bucket(bucket_name)
-# blob = bucket.blob(source_blob_name)
-#
-# blob.download_to_filename(destination_file_name)
-#
-# print('Blob {} downloaded to {}.'.format(
-# source_blob_name,
-# destination_file_name))
diff --git a/cv/detection/yolov7/pytorch/utils/loss.py b/cv/detection/yolov7/pytorch/utils/loss.py
deleted file mode 100644
index 2b1d968f8fee4ae7822776c006cd9e05424f4286..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/loss.py
+++ /dev/null
@@ -1,1697 +0,0 @@
-# Loss functions
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from utils.general import bbox_iou, bbox_alpha_iou, box_iou, box_giou, box_diou, box_ciou, xywh2xyxy
-from utils.torch_utils import is_parallel
-
-
-def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441
- # return positive, negative label smoothing BCE targets
- return 1.0 - 0.5 * eps, 0.5 * eps
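-# e.g. eps=0.1 yields positive targets of 0.95 and negative targets of 0.05.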
-
-
-class BCEBlurWithLogitsLoss(nn.Module):
- # BCEWithLogitsLoss() with reduced missing-label effects.
- def __init__(self, alpha=0.05):
- super(BCEBlurWithLogitsLoss, self).__init__()
- self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss()
- self.alpha = alpha
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- pred = torch.sigmoid(pred) # prob from logits
- dx = pred - true # reduce only missing label effects
- # dx = (pred - true).abs() # reduce missing label and false label effects
- alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
- loss *= alpha_factor
- return loss.mean()
-
-
-class SigmoidBin(nn.Module):
- stride = None # strides computed during build
- export = False # onnx export
-
- def __init__(self, bin_count=10, min=0.0, max=1.0, reg_scale = 2.0, use_loss_regression=True, use_fw_regression=True, BCE_weight=1.0, smooth_eps=0.0):
- super(SigmoidBin, self).__init__()
-
- self.bin_count = bin_count
- self.length = bin_count + 1
- self.min = min
- self.max = max
- self.scale = float(max - min)
- self.shift = self.scale / 2.0
-
- self.use_loss_regression = use_loss_regression
- self.use_fw_regression = use_fw_regression
- self.reg_scale = reg_scale
- self.BCE_weight = BCE_weight
-
- start = min + (self.scale/2.0) / self.bin_count
- end = max - (self.scale/2.0) / self.bin_count
- step = self.scale / self.bin_count
- self.step = step
- #print(f" start = {start}, end = {end}, step = {step} ")
-
- bins = torch.range(start, end + 0.0001, step).float()
- self.register_buffer('bins', bins)
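- # NOTE: torch.range is deprecated and end-inclusive; torch.arange(start, end + 0.0001, step)
- # yields the same bin centers under the end-exclusive convention.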
-
-
- self.cp = 1.0 - 0.5 * smooth_eps
- self.cn = 0.5 * smooth_eps
-
- self.BCEbins = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([BCE_weight]))
- self.MSELoss = nn.MSELoss()
-
- def get_length(self):
- return self.length
-
- def forward(self, pred):
- assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length)
-
- pred_reg = (pred[..., 0] * self.reg_scale - self.reg_scale/2.0) * self.step
- pred_bin = pred[..., 1:(1+self.bin_count)]
-
- _, bin_idx = torch.max(pred_bin, dim=-1)
- bin_bias = self.bins[bin_idx]
-
- if self.use_fw_regression:
- result = pred_reg + bin_bias
- else:
- result = bin_bias
- result = result.clamp(min=self.min, max=self.max)
-
- return result
-
-
- def training_loss(self, pred, target):
- assert pred.shape[-1] == self.length, 'pred.shape[-1]=%d is not equal to self.length=%d' % (pred.shape[-1], self.length)
- assert pred.shape[0] == target.shape[0], 'pred.shape=%d is not equal to the target.shape=%d' % (pred.shape[0], target.shape[0])
- device = pred.device
-
- pred_reg = (pred[..., 0].sigmoid() * self.reg_scale - self.reg_scale/2.0) * self.step
- pred_bin = pred[..., 1:(1+self.bin_count)]
-
- diff_bin_target = torch.abs(target[..., None] - self.bins)
- _, bin_idx = torch.min(diff_bin_target, dim=-1)
-
- bin_bias = self.bins[bin_idx]
- bin_bias.requires_grad = False
- result = pred_reg + bin_bias
-
- target_bins = torch.full_like(pred_bin, self.cn, device=device) # targets
- n = pred.shape[0]
- target_bins[range(n), bin_idx] = self.cp
-
- loss_bin = self.BCEbins(pred_bin, target_bins) # BCE
-
- if self.use_loss_regression:
- loss_regression = self.MSELoss(result, target) # MSE
- loss = loss_bin + loss_regression
- else:
- loss = loss_bin
-
- out_result = result.clamp(min=self.min, max=self.max)
-
- return loss, out_result
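- # Worked example: bin_count=10, min=0.0, max=4.0 gives step 0.4 and bin centers
- # [0.2, 0.6, ..., 3.8]; the regression head adds a residual in (-0.4, 0.4), so the
- # output is the selected bin center refined by at most one step.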
-
-
-class FocalLoss(nn.Module):
- # Wraps focal loss around an existing loss_fcn(), i.e. criterion = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(FocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- # p_t = torch.exp(-loss)
- # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
-
- # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
- pred_prob = torch.sigmoid(pred) # prob from logits
- p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = (1.0 - p_t) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
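- # e.g. with gamma=1.5, a well-classified example (p_t = 0.9) is down-weighted by
- # (1 - 0.9) ** 1.5 ≈ 0.032, focusing training on hard, misclassified examples.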
-
-
-class QFocalLoss(nn.Module):
- # Wraps quality focal loss around an existing loss_fcn(), i.e. criterion = QFocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super(QFocalLoss, self).__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
-
- pred_prob = torch.sigmoid(pred) # prob from logits
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = torch.abs(true - pred_prob) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
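- # The |true - pred_prob| ** gamma factor generalizes focal loss to continuous
- # quality targets, cf. Generalized Focal Loss (Li et al., 2020).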
-
-class RankSort(torch.autograd.Function):
- @staticmethod
- def forward(ctx, logits, targets, delta_RS=0.50, eps=1e-10):
-
- classification_grads=torch.zeros(logits.shape).cuda()
-
- #Filter fg logits
- fg_labels = (targets > 0.)
- fg_logits = logits[fg_labels]
- fg_targets = targets[fg_labels]
- fg_num = len(fg_logits)
-
- #Do not use bg with scores less than minimum fg logit
- #since changing its score does not have an effect on precision
- threshold_logit = torch.min(fg_logits)-delta_RS
- relevant_bg_labels=((targets==0) & (logits>=threshold_logit))
-
- relevant_bg_logits = logits[relevant_bg_labels]
- relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
- sorting_error=torch.zeros(fg_num).cuda()
- ranking_error=torch.zeros(fg_num).cuda()
- fg_grad=torch.zeros(fg_num).cuda()
-
- #sort the fg logits
- order=torch.argsort(fg_logits)
- #Loops over each positive following the order
- for ii in order:
- # Difference Transforms (x_ij)
- fg_relations=fg_logits-fg_logits[ii]
- bg_relations=relevant_bg_logits-fg_logits[ii]
-
- if delta_RS > 0:
- fg_relations=torch.clamp(fg_relations/(2*delta_RS)+0.5,min=0,max=1)
- bg_relations=torch.clamp(bg_relations/(2*delta_RS)+0.5,min=0,max=1)
- else:
- fg_relations = (fg_relations >= 0).float()
- bg_relations = (bg_relations >= 0).float()
-
- # Rank of ii among positives, and the false-positive count (bg with larger scores)
- rank_pos=torch.sum(fg_relations)
- FP_num=torch.sum(bg_relations)
-
- # Rank of ii among all examples
- rank=rank_pos+FP_num
-
- # Ranking error of example ii. target_ranking_error is always 0. (Eq. 7)
- ranking_error[ii]=FP_num/rank
-
- # Current sorting error of example ii. (Eq. 7)
- current_sorting_error = torch.sum(fg_relations*(1-fg_targets))/rank_pos
-
- #Find examples in the target sorted order for example ii
- iou_relations = (fg_targets >= fg_targets[ii])
- target_sorted_order = iou_relations * fg_relations
-
- #The rank of ii among positives in sorted order
- rank_pos_target = torch.sum(target_sorted_order)
-
- #Compute target sorting error. (Eq. 8)
- #Since target ranking error is 0, this is also total target error
- target_sorting_error= torch.sum(target_sorted_order*(1-fg_targets))/rank_pos_target
-
- #Compute sorting error on example ii
- sorting_error[ii] = current_sorting_error - target_sorting_error
-
- #Identity Update for Ranking Error
- if FP_num > eps:
- #For ii the update is the ranking error
- fg_grad[ii] -= ranking_error[ii]
- #For negatives, distribute error via ranking pmf (i.e. bg_relations/FP_num)
- relevant_bg_grad += (bg_relations*(ranking_error[ii]/FP_num))
-
- #Find the positives that are misranked (the cause of the error)
- #These are the ones with smaller IoU but larger logits
- missorted_examples = (~ iou_relations) * fg_relations
-
- #Denominator of sorting pmf
- sorting_pmf_denom = torch.sum(missorted_examples)
-
- #Identity Update for Sorting Error
- if sorting_pmf_denom > eps:
- #For ii the update is the sorting error
- fg_grad[ii] -= sorting_error[ii]
- #For positives, distribute error via sorting pmf (i.e. missorted_examples/sorting_pmf_denom)
- fg_grad += (missorted_examples*(sorting_error[ii]/sorting_pmf_denom))
-
- #Normalize gradients by number of positives
- classification_grads[fg_labels]= (fg_grad/fg_num)
- classification_grads[relevant_bg_labels]= (relevant_bg_grad/fg_num)
-
- ctx.save_for_backward(classification_grads)
-
- return ranking_error.mean(), sorting_error.mean()
-
- @staticmethod
- def backward(ctx, out_grad1, out_grad2):
- g1, =ctx.saved_tensors
- return g1*out_grad1, None, None, None
-
-class aLRPLoss(torch.autograd.Function):
- @staticmethod
- def forward(ctx, logits, targets, regression_losses, delta=1., eps=1e-5):
- classification_grads=torch.zeros(logits.shape).cuda()
-
- #Filter fg logits
- fg_labels = (targets == 1)
- fg_logits = logits[fg_labels]
- fg_num = len(fg_logits)
-
- #Do not use bg with scores less than minimum fg logit
- #since changing its score does not have an effect on precision
- threshold_logit = torch.min(fg_logits)-delta
-
- #Get valid bg logits
- relevant_bg_labels=((targets==0)&(logits>=threshold_logit))
- relevant_bg_logits=logits[relevant_bg_labels]
- relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
- rank=torch.zeros(fg_num).cuda()
- prec=torch.zeros(fg_num).cuda()
- fg_grad=torch.zeros(fg_num).cuda()
-
- max_prec=0
- #sort the fg logits
- order=torch.argsort(fg_logits)
- #Loops over each positive following the order
- for ii in order:
- #x_ij s as score differences with fgs
- fg_relations=fg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with fgs
- fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1)
- #Discard i=j in the summation in rank_pos
- fg_relations[ii]=0
-
- #x_ij s as score differences with bgs
- bg_relations=relevant_bg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with bgs
- bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1)
-
- #Compute the rank of the example within fgs and number of bgs with larger scores
- rank_pos=1+torch.sum(fg_relations)
- FP_num=torch.sum(bg_relations)
- #Store the total since it is also the normalizer for the aLRP regression error
- rank[ii]=rank_pos+FP_num
-
- #Compute precision for this example to compute classification loss
- prec[ii]=rank_pos/rank[ii]
- #For stability, set eps to an infinitesimal value (e.g. 1e-6), then compute grads
- if FP_num > eps:
- fg_grad[ii] = -(torch.sum(fg_relations*regression_losses)+FP_num)/rank[ii]
- relevant_bg_grad += (bg_relations*(-fg_grad[ii]/FP_num))
-
- #aLRP with grad formulation fg gradient
- classification_grads[fg_labels]= fg_grad
- #aLRP with grad formulation bg gradient
- classification_grads[relevant_bg_labels]= relevant_bg_grad
-
- classification_grads /= (fg_num)
-
- cls_loss=1-prec.mean()
- ctx.save_for_backward(classification_grads)
-
- return cls_loss, rank, order
-
- @staticmethod
- def backward(ctx, out_grad1, out_grad2, out_grad3):
- g1, =ctx.saved_tensors
- return g1*out_grad1, None, None, None, None
-
-
-class APLoss(torch.autograd.Function):
- @staticmethod
- def forward(ctx, logits, targets, delta=1.):
- classification_grads=torch.zeros(logits.shape).cuda()
-
- #Filter fg logits
- fg_labels = (targets == 1)
- fg_logits = logits[fg_labels]
- fg_num = len(fg_logits)
-
- #Do not use bg with scores less than minimum fg logit
- #since changing its score does not have an effect on precision
- threshold_logit = torch.min(fg_logits)-delta
-
- #Get valid bg logits
- relevant_bg_labels=((targets==0)&(logits>=threshold_logit))
- relevant_bg_logits=logits[relevant_bg_labels]
- relevant_bg_grad=torch.zeros(len(relevant_bg_logits)).cuda()
- rank=torch.zeros(fg_num).cuda()
- prec=torch.zeros(fg_num).cuda()
- fg_grad=torch.zeros(fg_num).cuda()
-
- max_prec=0
- #sort the fg logits
- order=torch.argsort(fg_logits)
- #Loops over each positive following the order
- for ii in order:
- #x_ij s as score differences with fgs
- fg_relations=fg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with fgs
- fg_relations=torch.clamp(fg_relations/(2*delta)+0.5,min=0,max=1)
- #Discard i=j in the summation in rank_pos
- fg_relations[ii]=0
-
- #x_ij s as score differences with bgs
- bg_relations=relevant_bg_logits-fg_logits[ii]
- #Apply piecewise linear function and determine relations with bgs
- bg_relations=torch.clamp(bg_relations/(2*delta)+0.5,min=0,max=1)
-
- #Compute the rank of the example within fgs and number of bgs with larger scores
- rank_pos=1+torch.sum(fg_relations)
- FP_num=torch.sum(bg_relations)
- #Store the total since it is also the normalizer for the aLRP regression error
- rank[ii]=rank_pos+FP_num
-
- #Compute precision for this example
- current_prec=rank_pos/rank[ii]
-
- #Compute interpolated AP and store gradients for relevant bg examples
- if (max_prec<=current_prec):
- max_prec=current_prec
- relevant_bg_grad += (bg_relations/rank[ii])
- else:
- relevant_bg_grad += (bg_relations/rank[ii])*(((1-max_prec)/(1-current_prec)))
-
- #Store fg gradients
- fg_grad[ii]=-(1-max_prec)
- prec[ii]=max_prec
-
- #aLRP with grad formulation fg gradient
- classification_grads[fg_labels]= fg_grad
- #aLRP with grad formulation bg gradient
- classification_grads[relevant_bg_labels]= relevant_bg_grad
-
- classification_grads /= fg_num
-
- cls_loss=1-prec.mean()
- ctx.save_for_backward(classification_grads)
-
- return cls_loss
-
- @staticmethod
- def backward(ctx, out_grad1):
- g1, =ctx.saved_tensors
- return g1*out_grad1, None, None
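-# RankSort, aLRPLoss and APLoss are custom autograd Functions: forward() computes the
-# ranking-based loss and caches handcrafted per-logit gradients in ctx, and backward()
-# simply rescales those cached gradients, since these ranking losses are not
-# differentiable in the usual autograd sense.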
-
-
-class ComputeLoss:
- # Compute losses
- def __init__(self, model, autobalance=False):
- super(ComputeLoss, self).__init__()
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.1, .05]) # P3-P7
- #self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.5, 0.4, .1]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors':
- setattr(self, k, getattr(det, k))
-
- def __call__(self, p, targets): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- pxy = ps[:, :2].sigmoid() * 2. - 0.5
- pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
- t[range(n), tcls[i]] = self.cp
- #t[t==self.cp] = iou.detach().clamp(0).type(t.dtype)
- lcls += self.BCEcls(ps[:, 5:], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., 4], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
- def build_targets(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- tcls, tbox, indices, anch = [], [], [], []
- gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- tbox.append(torch.cat((gxy - gij, gwh), 1)) # box
- anch.append(anchors[a]) # anchors
- tcls.append(c) # class
-
- return tcls, tbox, indices, anch
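- # Assignment summary: a target is kept for anchors whose w/h ratio to it is within
- # hyp['anchor_t'], and is assigned to its own grid cell plus the two neighbouring
- # cells its center is closest to (bias g=0.5), i.e. up to 3 positive cells per
- # target per layer.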
-
-
-class ComputeLossOTA:
- # Compute losses
- def __init__(self, model, autobalance=False):
- super(ComputeLossOTA, self).__init__()
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors', 'stride':
- setattr(self, k, getattr(det, k))
-
- def __call__(self, p, targets, imgs): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
- pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p]
-
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- grid = torch.stack([gi, gj], dim=1)
- pxy = ps[:, :2].sigmoid() * 2. - 0.5
- #pxy = ps[:, :2].sigmoid() * 3. - 1.
- pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
- selected_tbox[:, :2] -= grid
- iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- selected_tcls = targets[i][:, 1].long()
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
- t[range(n), selected_tcls] = self.cp
- lcls += self.BCEcls(ps[:, 5:], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., 4], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
- def build_targets(self, p, targets, imgs):
-
- #indices, anch = self.find_positive(p, targets)
- indices, anch = self.find_3_positive(p, targets)
- #indices, anch = self.find_4_positive(p, targets)
- #indices, anch = self.find_5_positive(p, targets)
- #indices, anch = self.find_9_positive(p, targets)
- device = torch.device(targets.device)
- matching_bs = [[] for pp in p]
- matching_as = [[] for pp in p]
- matching_gjs = [[] for pp in p]
- matching_gis = [[] for pp in p]
- matching_targets = [[] for pp in p]
- matching_anchs = [[] for pp in p]
-
- nl = len(p)
-
- for batch_idx in range(p[0].shape[0]):
-
- b_idx = targets[:, 0]==batch_idx
- this_target = targets[b_idx]
- if this_target.shape[0] == 0:
- continue
-
- txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
- txyxy = xywh2xyxy(txywh)
-
- pxyxys = []
- p_cls = []
- p_obj = []
- from_which_layer = []
- all_b = []
- all_a = []
- all_gj = []
- all_gi = []
- all_anch = []
-
- for i, pi in enumerate(p):
-
- b, a, gj, gi = indices[i]
- idx = (b == batch_idx)
- b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
- all_b.append(b)
- all_a.append(a)
- all_gj.append(gj)
- all_gi.append(gi)
- all_anch.append(anch[i][idx])
- from_which_layer.append((torch.ones(size=(len(b),)) * i).to(device))
-
- fg_pred = pi[b, a, gj, gi]
- p_obj.append(fg_pred[:, 4:5])
- p_cls.append(fg_pred[:, 5:])
-
- grid = torch.stack([gi, gj], dim=1)
- pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
- #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i]
- pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
- pxywh = torch.cat([pxy, pwh], dim=-1)
- pxyxy = xywh2xyxy(pxywh)
- pxyxys.append(pxyxy)
-
- pxyxys = torch.cat(pxyxys, dim=0)
- if pxyxys.shape[0] == 0:
- continue
- p_obj = torch.cat(p_obj, dim=0)
- p_cls = torch.cat(p_cls, dim=0)
- from_which_layer = torch.cat(from_which_layer, dim=0)
- all_b = torch.cat(all_b, dim=0)
- all_a = torch.cat(all_a, dim=0)
- all_gj = torch.cat(all_gj, dim=0)
- all_gi = torch.cat(all_gi, dim=0)
- all_anch = torch.cat(all_anch, dim=0)
-
- pair_wise_iou = box_iou(txyxy, pxyxys)
-
- pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
- top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1)
- dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
- gt_cls_per_image = (
- F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
- .float()
- .unsqueeze(1)
- .repeat(1, pxyxys.shape[0], 1)
- )
-
- num_gt = this_target.shape[0]
- cls_preds_ = (
- p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- )
-
- y = cls_preds_.sqrt_()
- pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
- torch.log(y / (1 - y)), gt_cls_per_image, reduction="none"
- ).sum(-1)
- del cls_preds_
-
- cost = (
- pair_wise_cls_loss
- + 3.0 * pair_wise_iou_loss
- )
-
- matching_matrix = torch.zeros_like(cost, device=device)
-
- for gt_idx in range(num_gt):
- _, pos_idx = torch.topk(
- cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
- )
- matching_matrix[gt_idx][pos_idx] = 1.0
-
- del top_k, dynamic_ks
- anchor_matching_gt = matching_matrix.sum(0)
- if (anchor_matching_gt > 1).sum() > 0:
- _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
- matching_matrix[:, anchor_matching_gt > 1] *= 0.0
- matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
- fg_mask_inboxes = (matching_matrix.sum(0) > 0.0).to(device)
- matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
- from_which_layer = from_which_layer[fg_mask_inboxes]
- all_b = all_b[fg_mask_inboxes]
- all_a = all_a[fg_mask_inboxes]
- all_gj = all_gj[fg_mask_inboxes]
- all_gi = all_gi[fg_mask_inboxes]
- all_anch = all_anch[fg_mask_inboxes]
-
- this_target = this_target[matched_gt_inds]
-
- for i in range(nl):
- layer_idx = from_which_layer == i
- matching_bs[i].append(all_b[layer_idx])
- matching_as[i].append(all_a[layer_idx])
- matching_gjs[i].append(all_gj[layer_idx])
- matching_gis[i].append(all_gi[layer_idx])
- matching_targets[i].append(this_target[layer_idx])
- matching_anchs[i].append(all_anch[layer_idx])
-
- for i in range(nl):
- if matching_targets[i] != []:
- matching_bs[i] = torch.cat(matching_bs[i], dim=0)
- matching_as[i] = torch.cat(matching_as[i], dim=0)
- matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
- matching_gis[i] = torch.cat(matching_gis[i], dim=0)
- matching_targets[i] = torch.cat(matching_targets[i], dim=0)
- matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
- else:
- matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-
- return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
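- # Matching summary (SimOTA-style): candidates come from find_3_positive(); per-GT
- # cost over candidates is cls BCE + 3.0 * (-log IoU); a dynamic k per GT (the
- # clamped sum of its top-10 IoUs) selects the cheapest candidates, and a candidate
- # claimed by several GTs keeps only its min-cost match. Note the empty-tensor
- # fallbacks above hard-code device='cuda:0', assuming single-GPU CUDA training.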
-
- def find_3_positive(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- indices, anch = [], []
- gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- anch.append(anchors[a]) # anchors
-
- return indices, anch
-
-
-class ComputeLossBinOTA:
- # Compute losses
- def __init__(self, model, autobalance=False):
- super(ComputeLossBinOTA, self).__init__()
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
- #MSEangle = nn.MSELoss().to(device)
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors', 'stride', 'bin_count':
- setattr(self, k, getattr(det, k))
-
- #xy_bin_sigmoid = SigmoidBin(bin_count=11, min=-0.5, max=1.5, use_loss_regression=False).to(device)
- wh_bin_sigmoid = SigmoidBin(bin_count=self.bin_count, min=0.0, max=4.0, use_loss_regression=False).to(device)
- #angle_bin_sigmoid = SigmoidBin(bin_count=31, min=-1.1, max=1.1, use_loss_regression=False).to(device)
- self.wh_bin_sigmoid = wh_bin_sigmoid
-
- def __call__(self, p, targets, imgs): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- bs, as_, gjs, gis, targets, anchors = self.build_targets(p, targets, imgs)
- pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p]
-
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
-
- obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2 # x,y, w-bce, h-bce # xy_bin_sigmoid.get_length()*2
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- grid = torch.stack([gi, gj], dim=1)
- selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
- selected_tbox[:, :2] -= grid
-
- #pxy = ps[:, :2].sigmoid() * 2. - 0.5
- ##pxy = ps[:, :2].sigmoid() * 3. - 1.
- #pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- #pbox = torch.cat((pxy, pwh), 1) # predicted box
-
- #x_loss, px = xy_bin_sigmoid.training_loss(ps[..., 0:12], tbox[i][..., 0])
- #y_loss, py = xy_bin_sigmoid.training_loss(ps[..., 12:24], tbox[i][..., 1])
- w_loss, pw = self.wh_bin_sigmoid.training_loss(ps[..., 2:(3+self.bin_count)], selected_tbox[..., 2] / anchors[i][..., 0])
- h_loss, ph = self.wh_bin_sigmoid.training_loss(ps[..., (3+self.bin_count):obj_idx], selected_tbox[..., 3] / anchors[i][..., 1])
-
- pw *= anchors[i][..., 0]
- ph *= anchors[i][..., 1]
-
- px = ps[:, 0].sigmoid() * 2. - 0.5
- py = ps[:, 1].sigmoid() * 2. - 0.5
-
- lbox += w_loss + h_loss # + x_loss + y_loss
-
- #print(f"\n px = {px.shape}, py = {py.shape}, pw = {pw.shape}, ph = {ph.shape} \n")
-
- pbox = torch.cat((px.unsqueeze(1), py.unsqueeze(1), pw.unsqueeze(1), ph.unsqueeze(1)), 1).to(device) # predicted box
-
- iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- selected_tcls = targets[i][:, 1].long()
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, (1+obj_idx):], self.cn, device=device) # targets
- t[range(n), selected_tcls] = self.cp
- lcls += self.BCEcls(ps[:, (1+obj_idx):], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., obj_idx], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
-
- def build_targets(self, p, targets, imgs):
-
- #indices, anch = self.find_positive(p, targets)
- indices, anch = self.find_3_positive(p, targets)
- #indices, anch = self.find_4_positive(p, targets)
- #indices, anch = self.find_5_positive(p, targets)
- #indices, anch = self.find_9_positive(p, targets)
-
- matching_bs = [[] for pp in p]
- matching_as = [[] for pp in p]
- matching_gjs = [[] for pp in p]
- matching_gis = [[] for pp in p]
- matching_targets = [[] for pp in p]
- matching_anchs = [[] for pp in p]
-
- nl = len(p)
-
- for batch_idx in range(p[0].shape[0]):
-
- b_idx = targets[:, 0]==batch_idx
- this_target = targets[b_idx]
- if this_target.shape[0] == 0:
- continue
-
- txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
- txyxy = xywh2xyxy(txywh)
-
- pxyxys = []
- p_cls = []
- p_obj = []
- from_which_layer = []
- all_b = []
- all_a = []
- all_gj = []
- all_gi = []
- all_anch = []
-
- for i, pi in enumerate(p):
-
- obj_idx = self.wh_bin_sigmoid.get_length()*2 + 2
-
- b, a, gj, gi = indices[i]
- idx = (b == batch_idx)
- b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
- all_b.append(b)
- all_a.append(a)
- all_gj.append(gj)
- all_gi.append(gi)
- all_anch.append(anch[i][idx])
- from_which_layer.append(torch.ones(size=(len(b),)) * i)
-
- fg_pred = pi[b, a, gj, gi]
- p_obj.append(fg_pred[:, obj_idx:(obj_idx+1)])
- p_cls.append(fg_pred[:, (obj_idx+1):])
-
- grid = torch.stack([gi, gj], dim=1)
- pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
- #pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
- pw = self.wh_bin_sigmoid.forward(fg_pred[..., 2:(3+self.bin_count)].sigmoid()) * anch[i][idx][:, 0] * self.stride[i]
- ph = self.wh_bin_sigmoid.forward(fg_pred[..., (3+self.bin_count):obj_idx].sigmoid()) * anch[i][idx][:, 1] * self.stride[i]
-
- pxywh = torch.cat([pxy, pw.unsqueeze(1), ph.unsqueeze(1)], dim=-1)
- pxyxy = xywh2xyxy(pxywh)
- pxyxys.append(pxyxy)
-
- pxyxys = torch.cat(pxyxys, dim=0)
- if pxyxys.shape[0] == 0:
- continue
- p_obj = torch.cat(p_obj, dim=0)
- p_cls = torch.cat(p_cls, dim=0)
- from_which_layer = torch.cat(from_which_layer, dim=0)
- all_b = torch.cat(all_b, dim=0)
- all_a = torch.cat(all_a, dim=0)
- all_gj = torch.cat(all_gj, dim=0)
- all_gi = torch.cat(all_gi, dim=0)
- all_anch = torch.cat(all_anch, dim=0)
-
- pair_wise_iou = box_iou(txyxy, pxyxys)
-
- pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
- top_k, _ = torch.topk(pair_wise_iou, min(10, pair_wise_iou.shape[1]), dim=1)
- dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
- gt_cls_per_image = (
- F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
- .float()
- .unsqueeze(1)
- .repeat(1, pxyxys.shape[0], 1)
- )
-
- num_gt = this_target.shape[0]
- cls_preds_ = (
- p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- )
-
- y = cls_preds_.sqrt_()
- pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
- torch.log(y / (1 - y)), gt_cls_per_image, reduction="none"
- ).sum(-1)
- del cls_preds_
-
- cost = (
- pair_wise_cls_loss
- + 3.0 * pair_wise_iou_loss
- )
-
- matching_matrix = torch.zeros_like(cost)
-
- for gt_idx in range(num_gt):
- _, pos_idx = torch.topk(
- cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
- )
- matching_matrix[gt_idx][pos_idx] = 1.0
-
- del top_k, dynamic_ks
- anchor_matching_gt = matching_matrix.sum(0)
- if (anchor_matching_gt > 1).sum() > 0:
- _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
- matching_matrix[:, anchor_matching_gt > 1] *= 0.0
- matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
- fg_mask_inboxes = matching_matrix.sum(0) > 0.0
- matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
- from_which_layer = from_which_layer[fg_mask_inboxes]
- all_b = all_b[fg_mask_inboxes]
- all_a = all_a[fg_mask_inboxes]
- all_gj = all_gj[fg_mask_inboxes]
- all_gi = all_gi[fg_mask_inboxes]
- all_anch = all_anch[fg_mask_inboxes]
-
- this_target = this_target[matched_gt_inds]
-
- for i in range(nl):
- layer_idx = from_which_layer == i
- matching_bs[i].append(all_b[layer_idx])
- matching_as[i].append(all_a[layer_idx])
- matching_gjs[i].append(all_gj[layer_idx])
- matching_gis[i].append(all_gi[layer_idx])
- matching_targets[i].append(this_target[layer_idx])
- matching_anchs[i].append(all_anch[layer_idx])
-
- for i in range(nl):
- if matching_targets[i] != []:
- matching_bs[i] = torch.cat(matching_bs[i], dim=0)
- matching_as[i] = torch.cat(matching_as[i], dim=0)
- matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
- matching_gis[i] = torch.cat(matching_gis[i], dim=0)
- matching_targets[i] = torch.cat(matching_targets[i], dim=0)
- matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
- else:
- matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-
- return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
-
- def find_3_positive(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- indices, anch = [], []
- gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- anch.append(anchors[a]) # anchors
-
- return indices, anch
-
-
-class ComputeLossAuxOTA:
- # Compute losses
- def __init__(self, model, autobalance=False):
- super(ComputeLossAuxOTA, self).__init__()
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7
- self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance
- for k in 'na', 'nc', 'nl', 'anchors', 'stride':
- setattr(self, k, getattr(det, k))
-
- def __call__(self, p, targets, imgs): # predictions, targets, model
- device = targets.device
- lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device)
- bs_aux, as_aux_, gjs_aux, gis_aux, targets_aux, anchors_aux = self.build_targets2(p[:self.nl], targets, imgs)
- bs, as_, gjs, gis, targets, anchors = self.build_targets(p[:self.nl], targets, imgs)
- pre_gen_gains_aux = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p[:self.nl]]
- pre_gen_gains = [torch.tensor(pp.shape, device=device)[[3, 2, 3, 2]] for pp in p[:self.nl]]
-
-
- # Losses
- for i in range(self.nl): # layer index, layer predictions
- pi = p[i]
- pi_aux = p[i+self.nl]
- b, a, gj, gi = bs[i], as_[i], gjs[i], gis[i] # image, anchor, gridy, gridx
- b_aux, a_aux, gj_aux, gi_aux = bs_aux[i], as_aux_[i], gjs_aux[i], gis_aux[i] # image, anchor, gridy, gridx
- tobj = torch.zeros_like(pi[..., 0], device=device) # target obj
- tobj_aux = torch.zeros_like(pi_aux[..., 0], device=device) # target obj
-
- n = b.shape[0] # number of targets
- if n:
- ps = pi[b, a, gj, gi] # prediction subset corresponding to targets
-
- # Regression
- grid = torch.stack([gi, gj], dim=1)
- pxy = ps[:, :2].sigmoid() * 2. - 0.5
- pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- selected_tbox = targets[i][:, 2:6] * pre_gen_gains[i]
- selected_tbox[:, :2] -= grid
- iou = bbox_iou(pbox.T, selected_tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio
-
- # Classification
- selected_tcls = targets[i][:, 1].long()
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets
- t[range(n), selected_tcls] = self.cp
- lcls += self.BCEcls(ps[:, 5:], t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- n_aux = b_aux.shape[0] # number of targets
- if n_aux:
- ps_aux = pi_aux[b_aux, a_aux, gj_aux, gi_aux] # prediction subset corresponding to targets
- grid_aux = torch.stack([gi_aux, gj_aux], dim=1)
- pxy_aux = ps_aux[:, :2].sigmoid() * 2. - 0.5
- #pxy_aux = ps_aux[:, :2].sigmoid() * 3. - 1.
- pwh_aux = (ps_aux[:, 2:4].sigmoid() * 2) ** 2 * anchors_aux[i]
- pbox_aux = torch.cat((pxy_aux, pwh_aux), 1) # predicted box
- selected_tbox_aux = targets_aux[i][:, 2:6] * pre_gen_gains_aux[i]
- selected_tbox_aux[:, :2] -= grid_aux
- iou_aux = bbox_iou(pbox_aux.T, selected_tbox_aux, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
- lbox += 0.25 * (1.0 - iou_aux).mean() # iou loss
-
- # Objectness
- tobj_aux[b_aux, a_aux, gj_aux, gi_aux] = (1.0 - self.gr) + self.gr * iou_aux.detach().clamp(0).type(tobj_aux.dtype) # iou ratio
-
- # Classification
- selected_tcls_aux = targets_aux[i][:, 1].long()
- if self.nc > 1: # cls loss (only if multiple classes)
- t_aux = torch.full_like(ps_aux[:, 5:], self.cn, device=device) # targets
- t_aux[range(n_aux), selected_tcls_aux] = self.cp
- lcls += 0.25 * self.BCEcls(ps_aux[:, 5:], t_aux) # BCE
-
- obji = self.BCEobj(pi[..., 4], tobj)
- obji_aux = self.BCEobj(pi_aux[..., 4], tobj_aux)
- lobj += obji * self.balance[i] + 0.25 * obji_aux * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- loss = lbox + lobj + lcls
- return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
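- # Aux-head note: p is expected to hold self.nl lead layers followed by self.nl
- # auxiliary layers; auxiliary losses are weighted by 0.25. Lead targets use the
- # find_3_positive assignment with dynamic k from the top-20 IoUs (vs top-10 in
- # ComputeLossOTA), while aux targets use the looser find_5_positive assignment.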
-
- def build_targets(self, p, targets, imgs):
-
- indices, anch = self.find_3_positive(p, targets)
-
- matching_bs = [[] for pp in p]
- matching_as = [[] for pp in p]
- matching_gjs = [[] for pp in p]
- matching_gis = [[] for pp in p]
- matching_targets = [[] for pp in p]
- matching_anchs = [[] for pp in p]
-
- nl = len(p)
-
- for batch_idx in range(p[0].shape[0]):
-
- b_idx = targets[:, 0]==batch_idx
- this_target = targets[b_idx]
- if this_target.shape[0] == 0:
- continue
-
- txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
- txyxy = xywh2xyxy(txywh)
-
- pxyxys = []
- p_cls = []
- p_obj = []
- from_which_layer = []
- all_b = []
- all_a = []
- all_gj = []
- all_gi = []
- all_anch = []
-
- for i, pi in enumerate(p):
-
- b, a, gj, gi = indices[i]
- idx = (b == batch_idx)
- b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
- all_b.append(b)
- all_a.append(a)
- all_gj.append(gj)
- all_gi.append(gi)
- all_anch.append(anch[i][idx])
- from_which_layer.append(torch.ones(size=(len(b),)) * i)
-
- fg_pred = pi[b, a, gj, gi]
- p_obj.append(fg_pred[:, 4:5])
- p_cls.append(fg_pred[:, 5:])
-
- grid = torch.stack([gi, gj], dim=1)
- pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
- #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i]
- pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
- pxywh = torch.cat([pxy, pwh], dim=-1)
- pxyxy = xywh2xyxy(pxywh)
- pxyxys.append(pxyxy)
-
- pxyxys = torch.cat(pxyxys, dim=0)
- if pxyxys.shape[0] == 0:
- continue
- p_obj = torch.cat(p_obj, dim=0)
- p_cls = torch.cat(p_cls, dim=0)
- from_which_layer = torch.cat(from_which_layer, dim=0)
- all_b = torch.cat(all_b, dim=0)
- all_a = torch.cat(all_a, dim=0)
- all_gj = torch.cat(all_gj, dim=0)
- all_gi = torch.cat(all_gi, dim=0)
- all_anch = torch.cat(all_anch, dim=0)
-
- pair_wise_iou = box_iou(txyxy, pxyxys)
-
- pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
- top_k, _ = torch.topk(pair_wise_iou, min(20, pair_wise_iou.shape[1]), dim=1)
- dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
- gt_cls_per_image = (
- F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
- .float()
- .unsqueeze(1)
- .repeat(1, pxyxys.shape[0], 1)
- )
-
- num_gt = this_target.shape[0]
- cls_preds_ = (
- p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- )
-
- y = cls_preds_.sqrt_()
- pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
- torch.log(y / (1 - y)), gt_cls_per_image, reduction="none"
- ).sum(-1)
- del cls_preds_
-
- cost = (
- pair_wise_cls_loss
- + 3.0 * pair_wise_iou_loss
- )
-
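-            # give each GT its dynamic_k lowest-cost candidates; candidates claimed by several
-            # GTs are reassigned below to the single GT with the lowest cost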
- matching_matrix = torch.zeros_like(cost)
-
- for gt_idx in range(num_gt):
- _, pos_idx = torch.topk(
- cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
- )
- matching_matrix[gt_idx][pos_idx] = 1.0
-
- del top_k, dynamic_ks
- anchor_matching_gt = matching_matrix.sum(0)
- if (anchor_matching_gt > 1).sum() > 0:
- _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
- matching_matrix[:, anchor_matching_gt > 1] *= 0.0
- matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
- fg_mask_inboxes = matching_matrix.sum(0) > 0.0
- matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
- from_which_layer = from_which_layer[fg_mask_inboxes]
- all_b = all_b[fg_mask_inboxes]
- all_a = all_a[fg_mask_inboxes]
- all_gj = all_gj[fg_mask_inboxes]
- all_gi = all_gi[fg_mask_inboxes]
- all_anch = all_anch[fg_mask_inboxes]
-
- this_target = this_target[matched_gt_inds]
-
- for i in range(nl):
- layer_idx = from_which_layer == i
- matching_bs[i].append(all_b[layer_idx])
- matching_as[i].append(all_a[layer_idx])
- matching_gjs[i].append(all_gj[layer_idx])
- matching_gis[i].append(all_gi[layer_idx])
- matching_targets[i].append(this_target[layer_idx])
- matching_anchs[i].append(all_anch[layer_idx])
-
- for i in range(nl):
- if matching_targets[i] != []:
- matching_bs[i] = torch.cat(matching_bs[i], dim=0)
- matching_as[i] = torch.cat(matching_as[i], dim=0)
- matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
- matching_gis[i] = torch.cat(matching_gis[i], dim=0)
- matching_targets[i] = torch.cat(matching_targets[i], dim=0)
- matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
- else:
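-                # no matches for this layer: empty placeholders (device is hard-coded to cuda:0 here)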
- matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-
- return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
-
- def build_targets2(self, p, targets, imgs):
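-        # identical to build_targets() above, except that candidate cells come from find_5_positive()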
-
- indices, anch = self.find_5_positive(p, targets)
-
- matching_bs = [[] for pp in p]
- matching_as = [[] for pp in p]
- matching_gjs = [[] for pp in p]
- matching_gis = [[] for pp in p]
- matching_targets = [[] for pp in p]
- matching_anchs = [[] for pp in p]
-
- nl = len(p)
-
- for batch_idx in range(p[0].shape[0]):
-
- b_idx = targets[:, 0]==batch_idx
- this_target = targets[b_idx]
- if this_target.shape[0] == 0:
- continue
-
- txywh = this_target[:, 2:6] * imgs[batch_idx].shape[1]
- txyxy = xywh2xyxy(txywh)
-
- pxyxys = []
- p_cls = []
- p_obj = []
- from_which_layer = []
- all_b = []
- all_a = []
- all_gj = []
- all_gi = []
- all_anch = []
-
- for i, pi in enumerate(p):
-
- b, a, gj, gi = indices[i]
- idx = (b == batch_idx)
- b, a, gj, gi = b[idx], a[idx], gj[idx], gi[idx]
- all_b.append(b)
- all_a.append(a)
- all_gj.append(gj)
- all_gi.append(gi)
- all_anch.append(anch[i][idx])
- from_which_layer.append(torch.ones(size=(len(b),)) * i)
-
- fg_pred = pi[b, a, gj, gi]
- p_obj.append(fg_pred[:, 4:5])
- p_cls.append(fg_pred[:, 5:])
-
- grid = torch.stack([gi, gj], dim=1)
- pxy = (fg_pred[:, :2].sigmoid() * 2. - 0.5 + grid) * self.stride[i] #/ 8.
- #pxy = (fg_pred[:, :2].sigmoid() * 3. - 1. + grid) * self.stride[i]
- pwh = (fg_pred[:, 2:4].sigmoid() * 2) ** 2 * anch[i][idx] * self.stride[i] #/ 8.
- pxywh = torch.cat([pxy, pwh], dim=-1)
- pxyxy = xywh2xyxy(pxywh)
- pxyxys.append(pxyxy)
-
- pxyxys = torch.cat(pxyxys, dim=0)
- if pxyxys.shape[0] == 0:
- continue
- p_obj = torch.cat(p_obj, dim=0)
- p_cls = torch.cat(p_cls, dim=0)
- from_which_layer = torch.cat(from_which_layer, dim=0)
- all_b = torch.cat(all_b, dim=0)
- all_a = torch.cat(all_a, dim=0)
- all_gj = torch.cat(all_gj, dim=0)
- all_gi = torch.cat(all_gi, dim=0)
- all_anch = torch.cat(all_anch, dim=0)
-
- pair_wise_iou = box_iou(txyxy, pxyxys)
-
- pair_wise_iou_loss = -torch.log(pair_wise_iou + 1e-8)
-
- top_k, _ = torch.topk(pair_wise_iou, min(20, pair_wise_iou.shape[1]), dim=1)
- dynamic_ks = torch.clamp(top_k.sum(1).int(), min=1)
-
- gt_cls_per_image = (
- F.one_hot(this_target[:, 1].to(torch.int64), self.nc)
- .float()
- .unsqueeze(1)
- .repeat(1, pxyxys.shape[0], 1)
- )
-
- num_gt = this_target.shape[0]
- cls_preds_ = (
- p_cls.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- * p_obj.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()
- )
-
- y = cls_preds_.sqrt_()
- pair_wise_cls_loss = F.binary_cross_entropy_with_logits(
-                torch.log(y / (1 - y)), gt_cls_per_image, reduction="none"
- ).sum(-1)
- del cls_preds_
-
- cost = (
- pair_wise_cls_loss
- + 3.0 * pair_wise_iou_loss
- )
-
- matching_matrix = torch.zeros_like(cost)
-
- for gt_idx in range(num_gt):
- _, pos_idx = torch.topk(
- cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False
- )
- matching_matrix[gt_idx][pos_idx] = 1.0
-
- del top_k, dynamic_ks
- anchor_matching_gt = matching_matrix.sum(0)
- if (anchor_matching_gt > 1).sum() > 0:
- _, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)
- matching_matrix[:, anchor_matching_gt > 1] *= 0.0
- matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0
- fg_mask_inboxes = matching_matrix.sum(0) > 0.0
- matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)
-
- from_which_layer = from_which_layer[fg_mask_inboxes]
- all_b = all_b[fg_mask_inboxes]
- all_a = all_a[fg_mask_inboxes]
- all_gj = all_gj[fg_mask_inboxes]
- all_gi = all_gi[fg_mask_inboxes]
- all_anch = all_anch[fg_mask_inboxes]
-
- this_target = this_target[matched_gt_inds]
-
- for i in range(nl):
- layer_idx = from_which_layer == i
- matching_bs[i].append(all_b[layer_idx])
- matching_as[i].append(all_a[layer_idx])
- matching_gjs[i].append(all_gj[layer_idx])
- matching_gis[i].append(all_gi[layer_idx])
- matching_targets[i].append(this_target[layer_idx])
- matching_anchs[i].append(all_anch[layer_idx])
-
- for i in range(nl):
- if matching_targets[i] != []:
- matching_bs[i] = torch.cat(matching_bs[i], dim=0)
- matching_as[i] = torch.cat(matching_as[i], dim=0)
- matching_gjs[i] = torch.cat(matching_gjs[i], dim=0)
- matching_gis[i] = torch.cat(matching_gis[i], dim=0)
- matching_targets[i] = torch.cat(matching_targets[i], dim=0)
- matching_anchs[i] = torch.cat(matching_anchs[i], dim=0)
- else:
- matching_bs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_as[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gjs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_gis[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_targets[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
- matching_anchs[i] = torch.tensor([], device='cuda:0', dtype=torch.int64)
-
- return matching_bs, matching_as, matching_gjs, matching_gis, matching_targets, matching_anchs
-
- def find_5_positive(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- indices, anch = [], []
- gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
-        g = 1.0  # bias: with g = 1.0 the centre cell and all 4 neighbour cells can match (up to 5 positives per target)
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
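-                # j,k flag the left/top neighbour cells when the box centre lies within g of that edge;
-                # l,m do the same for the right/bottom edges via the inverted coordinates gxi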
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- anch.append(anchors[a]) # anchors
-
- return indices, anch
-
- def find_3_positive(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
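-        # same as find_5_positive() but with bias g = 0.5, so only the centre cell plus the two
-        # nearest neighbour cells are taken as positives (3 per target)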
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- indices, anch = [], []
- gain = torch.ones(7, device=targets.device).long() # normalized to gridspace gain
- ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices
-
-        g = 0.5  # bias: with g = 0.5 only the centre cell plus the 2 nearest neighbour cells match (3 positives per target)
- off = torch.tensor([[0, 0],
- [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ], device=targets.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors = self.anchors[i]
- gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain
- if nt:
- # Matches
- r = t[:, :, 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1. < g) & (gxy > 1.)).T
- l, m = ((gxi % 1. < g) & (gxi > 1.)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- b, c = t[:, :2].long().T # image, class
- gxy = t[:, 2:4] # grid xy
- gwh = t[:, 4:6] # grid wh
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid xy indices
-
- # Append
- a = t[:, 6].long() # anchor indices
- indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices
- anch.append(anchors[a]) # anchors
-
- return indices, anch
diff --git a/cv/detection/yolov7/pytorch/utils/metrics.py b/cv/detection/yolov7/pytorch/utils/metrics.py
deleted file mode 100644
index 6d2f53647529ab0fc52f2e69fe2571794b024c94..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/metrics.py
+++ /dev/null
@@ -1,227 +0,0 @@
-# Model validation metrics
-
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-
-from . import general
-
-
-def fitness(x):
- # Model fitness as a weighted combination of metrics
- w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
- return (x[:, :4] * w).sum(1)
-
-
-def ap_per_class(tp, conf, pred_cls, target_cls, v5_metric=False, plot=False, save_dir='.', names=()):
- """ Compute the average precision, given the recall and precision curves.
- Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
- # Arguments
- tp: True positives (nparray, nx1 or nx10).
- conf: Objectness value from 0-1 (nparray).
- pred_cls: Predicted object classes (nparray).
- target_cls: True object classes (nparray).
- plot: Plot precision-recall curve at mAP@0.5
- save_dir: Plot save directory
- # Returns
- The average precision as computed in py-faster-rcnn.
- """
-
- # Sort by objectness
- i = np.argsort(-conf)
- tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
-
- # Find unique classes
- unique_classes = np.unique(target_cls)
-    nc = unique_classes.shape[0]  # number of classes
-
- # Create Precision-Recall curve and compute AP for each class
- px, py = np.linspace(0, 1, 1000), [] # for plotting
- ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000))
- for ci, c in enumerate(unique_classes):
- i = pred_cls == c
- n_l = (target_cls == c).sum() # number of labels
- n_p = i.sum() # number of predictions
-
- if n_p == 0 or n_l == 0:
- continue
- else:
- # Accumulate FPs and TPs
- fpc = (1 - tp[i]).cumsum(0)
- tpc = tp[i].cumsum(0)
-
- # Recall
- recall = tpc / (n_l + 1e-16) # recall curve
- r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases
-
- # Precision
- precision = tpc / (tpc + fpc) # precision curve
- p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score
-
- # AP from recall-precision curve
- for j in range(tp.shape[1]):
- ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j], v5_metric=v5_metric)
- if plot and j == 0:
- py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5
-
- # Compute F1 (harmonic mean of precision and recall)
- f1 = 2 * p * r / (p + r + 1e-16)
- if plot:
- plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names)
- plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1')
- plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision')
- plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall')
-
- i = f1.mean(0).argmax() # max F1 index
- return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32')
-
-
-def compute_ap(recall, precision, v5_metric=False):
- """ Compute the average precision, given the recall and precision curves
- # Arguments
- recall: The recall curve (list)
- precision: The precision curve (list)
-        v5_metric: Assume maximum recall to be 1.0, as in YOLOv5, MMDetection etc.
- # Returns
- Average precision, precision curve, recall curve
- """
-
- # Append sentinel values to beginning and end
- if v5_metric: # New YOLOv5 metric, same as MMDetection and Detectron2 repositories
- mrec = np.concatenate(([0.], recall, [1.0]))
- else: # Old YOLOv5 metric, i.e. default YOLOv7 metric
- mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01]))
- mpre = np.concatenate(([1.], precision, [0.]))
-
- # Compute the precision envelope
- mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
-
- # Integrate area under curve
- method = 'interp' # methods: 'continuous', 'interp'
- if method == 'interp':
- x = np.linspace(0, 1, 101) # 101-point interp (COCO)
- ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate
- else: # 'continuous'
- i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve
-
- return ap, mpre, mrec
-
-
-class ConfusionMatrix:
- # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
- def __init__(self, nc, conf=0.25, iou_thres=0.45):
- self.matrix = np.zeros((nc + 1, nc + 1))
- self.nc = nc # number of classes
- self.conf = conf
- self.iou_thres = iou_thres
-
- def process_batch(self, detections, labels):
- """
-        Update the confusion matrix for one batch of detections and labels.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- detections (Array[N, 6]), x1, y1, x2, y2, conf, class
- labels (Array[M, 5]), class, x1, y1, x2, y2
- Returns:
- None, updates confusion matrix accordingly
- """
- detections = detections[detections[:, 4] > self.conf]
- gt_classes = labels[:, 0].int()
- detection_classes = detections[:, 5].int()
- iou = general.box_iou(labels[:, 1:], detections[:, :4])
-
- x = torch.where(iou > self.iou_thres)
- if x[0].shape[0]:
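-            # greedy one-to-one matching: sort by IoU and keep each detection and each label at most once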
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- else:
- matches = np.zeros((0, 3))
-
- n = matches.shape[0] > 0
- m0, m1, _ = matches.transpose().astype(np.int16)
- for i, gc in enumerate(gt_classes):
- j = m0 == i
- if n and sum(j) == 1:
- self.matrix[gc, detection_classes[m1[j]]] += 1 # correct
- else:
- self.matrix[self.nc, gc] += 1 # background FP
-
- if n:
- for i, dc in enumerate(detection_classes):
- if not any(m1 == i):
- self.matrix[dc, self.nc] += 1 # background FN
-
-    def matrix(self):
-        return self.matrix  # note: in practice the ndarray attribute set in __init__ shadows this method
-
- def plot(self, save_dir='', names=()):
- try:
- import seaborn as sn
-
- array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize
- array[array < 0.005] = np.nan # don't annotate (would appear as 0.00)
-
- fig = plt.figure(figsize=(12, 9), tight_layout=True)
- sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size
- labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels
- sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True,
- xticklabels=names + ['background FP'] if labels else "auto",
- yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1))
- fig.axes[0].set_xlabel('True')
- fig.axes[0].set_ylabel('Predicted')
- fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250)
-        except Exception:
-            pass  # plotting is best-effort: a missing seaborn or a rendering failure is ignored
-
- def print(self):
- for i in range(self.nc + 1):
- print(' '.join(map(str, self.matrix[i])))
-
-
-# Plots ----------------------------------------------------------------------------------------------------------------
-
-def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()):
- # Precision-recall curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
- py = np.stack(py, axis=1)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py.T):
- ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision)
- else:
- ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision)
-
- ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean())
- ax.set_xlabel('Recall')
- ax.set_ylabel('Precision')
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
-
-
-def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'):
- # Metric-confidence curve
- fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
-
- if 0 < len(names) < 21: # display per-class legend if < 21 classes
- for i, y in enumerate(py):
- ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric)
- else:
- ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric)
-
- y = py.mean(0)
- ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}')
- ax.set_xlabel(xlabel)
- ax.set_ylabel(ylabel)
- ax.set_xlim(0, 1)
- ax.set_ylim(0, 1)
- plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
- fig.savefig(Path(save_dir), dpi=250)
diff --git a/cv/detection/yolov7/pytorch/utils/plots.py b/cv/detection/yolov7/pytorch/utils/plots.py
deleted file mode 100644
index fdd8d0e853deb228badeeed52fbbe5fb8eb10632..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/plots.py
+++ /dev/null
@@ -1,489 +0,0 @@
-# Plotting utils
-
-import glob
-import math
-import os
-import random
-from copy import copy
-from pathlib import Path
-
-import cv2
-import matplotlib
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-import seaborn as sns
-import torch
-import yaml
-from PIL import Image, ImageDraw, ImageFont
-from scipy.signal import butter, filtfilt
-
-from utils.general import xywh2xyxy, xyxy2xywh
-from utils.metrics import fitness
-
-# Settings
-matplotlib.rc('font', **{'size': 11})
-matplotlib.use('Agg') # for writing to files only
-
-
-def color_list():
- # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb
- def hex2rgb(h):
- return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))
-
- return [hex2rgb(h) for h in matplotlib.colors.TABLEAU_COLORS.values()] # or BASE_ (8), CSS4_ (148), XKCD_ (949)
-
-
-def hist2d(x, y, n=100):
- # 2d histogram used in labels.png and evolve.png
- xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n)
- hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges))
- xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1)
- yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1)
- return np.log(hist[xidx, yidx])
-
-
-def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5):
- # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy
- def butter_lowpass(cutoff, fs, order):
- nyq = 0.5 * fs
- normal_cutoff = cutoff / nyq
- return butter(order, normal_cutoff, btype='low', analog=False)
-
- b, a = butter_lowpass(cutoff, fs, order=order)
- return filtfilt(b, a, data) # forward-backward filter
-
-
-def plot_one_box(x, img, color=None, label=None, line_thickness=3):
- # Plots one bounding box on image img
- tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness
- color = color or [random.randint(0, 255) for _ in range(3)]
- c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
- cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
- if label:
- tf = max(tl - 1, 1) # font thickness
- t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
- c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
- cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled
- cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)
-
-
-def plot_one_box_PIL(box, img, color=None, label=None, line_thickness=None):
- img = Image.fromarray(img)
- draw = ImageDraw.Draw(img)
- line_thickness = line_thickness or max(int(min(img.size) / 200), 2)
- draw.rectangle(box, width=line_thickness, outline=tuple(color)) # plot
- if label:
- fontsize = max(round(max(img.size) / 40), 12)
- font = ImageFont.truetype("Arial.ttf", fontsize)
- txt_width, txt_height = font.getsize(label)
- draw.rectangle([box[0], box[1] - txt_height + 4, box[0] + txt_width, box[1]], fill=tuple(color))
- draw.text((box[0], box[1] - txt_height + 1), label, fill=(255, 255, 255), font=font)
- return np.asarray(img)
-
-
-def plot_wh_methods(): # from utils.plots import *; plot_wh_methods()
- # Compares the two methods for width-height anchor multiplication
- # https://github.com/ultralytics/yolov3/issues/168
- x = np.arange(-4.0, 4.0, .1)
- ya = np.exp(x)
- yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2
-
- fig = plt.figure(figsize=(6, 3), tight_layout=True)
- plt.plot(x, ya, '.-', label='YOLOv3')
- plt.plot(x, yb ** 2, '.-', label='YOLOR ^2')
- plt.plot(x, yb ** 1.6, '.-', label='YOLOR ^1.6')
- plt.xlim(left=-4, right=4)
- plt.ylim(bottom=0, top=6)
- plt.xlabel('input')
- plt.ylabel('output')
- plt.grid()
- plt.legend()
- fig.savefig('comparison.png', dpi=200)
-
-
-def output_to_target(output):
- # Convert model output to target format [batch_id, class_id, x, y, w, h, conf]
- targets = []
- for i, o in enumerate(output):
- for *box, conf, cls in o.cpu().numpy():
- targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf])
- return np.array(targets)
-
-
-def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16):
- # Plot image grid with labels
-
- if isinstance(images, torch.Tensor):
- images = images.cpu().float().numpy()
- if isinstance(targets, torch.Tensor):
- targets = targets.cpu().numpy()
-
- # un-normalise
- if np.max(images[0]) <= 1:
- images *= 255
-
- tl = 3 # line thickness
- tf = max(tl - 1, 1) # font thickness
- bs, _, h, w = images.shape # batch size, _, height, width
- bs = min(bs, max_subplots) # limit plot images
- ns = np.ceil(bs ** 0.5) # number of subplots (square)
-
- # Check if we should resize
- scale_factor = max_size / max(h, w)
- if scale_factor < 1:
- h = math.ceil(scale_factor * h)
- w = math.ceil(scale_factor * w)
-
- colors = color_list() # list of colors
- mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
- for i, img in enumerate(images):
- if i == max_subplots: # if last batch has fewer images than we expect
- break
-
- block_x = int(w * (i // ns))
- block_y = int(h * (i % ns))
-
- img = img.transpose(1, 2, 0)
- if scale_factor < 1:
- img = cv2.resize(img, (w, h))
-
- mosaic[block_y:block_y + h, block_x:block_x + w, :] = img
- if len(targets) > 0:
- image_targets = targets[targets[:, 0] == i]
- boxes = xywh2xyxy(image_targets[:, 2:6]).T
- classes = image_targets[:, 1].astype('int')
- labels = image_targets.shape[1] == 6 # labels if no conf column
- conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred)
-
- if boxes.shape[1]:
- if boxes.max() <= 1.01: # if normalized with tolerance 0.01
- boxes[[0, 2]] *= w # scale to pixels
- boxes[[1, 3]] *= h
- elif scale_factor < 1: # absolute coords need scale if image scales
- boxes *= scale_factor
- boxes[[0, 2]] += block_x
- boxes[[1, 3]] += block_y
- for j, box in enumerate(boxes.T):
- cls = int(classes[j])
- color = colors[cls % len(colors)]
- cls = names[cls] if names else cls
- if labels or conf[j] > 0.25: # 0.25 conf thresh
- label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j])
- plot_one_box(box, mosaic, label=label, color=color, line_thickness=tl)
-
- # Draw image filename labels
- if paths:
- label = Path(paths[i]).name[:40] # trim to 40 char
- t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
- cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf,
- lineType=cv2.LINE_AA)
-
- # Image border
- cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3)
-
- if fname:
- r = min(1280. / max(h, w) / ns, 1.0) # ratio to limit image size
- mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA)
- # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save
- Image.fromarray(mosaic).save(fname) # PIL save
- return mosaic
-
-
-def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
- # Plot LR simulating training for full epochs
- optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals
- y = []
- for _ in range(epochs):
- scheduler.step()
- y.append(optimizer.param_groups[0]['lr'])
- plt.plot(y, '.-', label='LR')
- plt.xlabel('epoch')
- plt.ylabel('LR')
- plt.grid()
- plt.xlim(0, epochs)
- plt.ylim(0)
- plt.savefig(Path(save_dir) / 'LR.png', dpi=200)
- plt.close()
-
-
-def plot_test_txt(): # from utils.plots import *; plot_test()
- # Plot test.txt histograms
- x = np.loadtxt('test.txt', dtype=np.float32)
- box = xyxy2xywh(x[:, :4])
- cx, cy = box[:, 0], box[:, 1]
-
- fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True)
- ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0)
- ax.set_aspect('equal')
- plt.savefig('hist2d.png', dpi=300)
-
- fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True)
- ax[0].hist(cx, bins=600)
- ax[1].hist(cy, bins=600)
- plt.savefig('hist1d.png', dpi=200)
-
-
-def plot_targets_txt(): # from utils.plots import *; plot_targets_txt()
- # Plot targets.txt histograms
- x = np.loadtxt('targets.txt', dtype=np.float32).T
- s = ['x targets', 'y targets', 'width targets', 'height targets']
- fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)
- ax = ax.ravel()
- for i in range(4):
- ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std()))
- ax[i].legend()
- ax[i].set_title(s[i])
- plt.savefig('targets.jpg', dpi=200)
-
-
-def plot_study_txt(path='', x=None): # from utils.plots import *; plot_study_txt()
- # Plot study.txt generated by test.py
- fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)
- # ax = ax.ravel()
-
- fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)
- # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolor-p6', 'yolor-w6', 'yolor-e6', 'yolor-d6']]:
- for f in sorted(Path(path).glob('study*.txt')):
- y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T
- x = np.arange(y.shape[1]) if x is None else np.array(x)
- s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)']
- # for i in range(7):
- # ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8)
- # ax[i].set_title(s[i])
-
- j = y[3].argmax() + 1
- ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8,
- label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO'))
-
- ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],
- 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet')
-
- ax2.grid(alpha=0.2)
- ax2.set_yticks(np.arange(20, 60, 5))
- ax2.set_xlim(0, 57)
- ax2.set_ylim(30, 55)
- ax2.set_xlabel('GPU Speed (ms/img)')
- ax2.set_ylabel('COCO AP val')
- ax2.legend(loc='lower right')
- plt.savefig(str(Path(path).name) + '.png', dpi=300)
-
-
-def plot_labels(labels, names=(), save_dir=Path(''), loggers=None):
- # plot dataset labels
- print('Plotting labels... ')
- c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes
- nc = int(c.max() + 1) # number of classes
- colors = color_list()
- x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height'])
-
- # seaborn correlogram
- sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9))
- plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200)
- plt.close()
-
- # matplotlib labels
- matplotlib.use('svg') # faster
- ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel()
- ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8)
- ax[0].set_ylabel('instances')
- if 0 < len(names) < 30:
- ax[0].set_xticks(range(len(names)))
- ax[0].set_xticklabels(names, rotation=90, fontsize=10)
- else:
- ax[0].set_xlabel('classes')
- sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9)
- sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9)
-
- # rectangles
- labels[:, 1:3] = 0.5 # center
- labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000
- img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255)
- for cls, *box in labels[:1000]:
- ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot
- ax[1].imshow(img)
- ax[1].axis('off')
-
- for a in [0, 1, 2, 3]:
- for s in ['top', 'right', 'left', 'bottom']:
- ax[a].spines[s].set_visible(False)
-
- plt.savefig(save_dir / 'labels.jpg', dpi=200)
- matplotlib.use('Agg')
- plt.close()
-
- # loggers
-    for k, v in (loggers or {}).items():  # tolerate the default loggers=None
- if k == 'wandb' and v:
- v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}, commit=False)
-
-
-def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution()
- # Plot hyperparameter evolution results in evolve.txt
- with open(yaml_file) as f:
- hyp = yaml.load(f, Loader=yaml.SafeLoader)
- x = np.loadtxt('evolve.txt', ndmin=2)
- f = fitness(x)
- # weights = (f - f.min()) ** 2 # for weighted results
- plt.figure(figsize=(10, 12), tight_layout=True)
- matplotlib.rc('font', **{'size': 8})
- for i, (k, v) in enumerate(hyp.items()):
- y = x[:, i + 7]
- # mu = (y * weights).sum() / weights.sum() # best weighted result
- mu = y[f.argmax()] # best single result
- plt.subplot(6, 5, i + 1)
- plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none')
- plt.plot(mu, f.max(), 'k+', markersize=15)
- plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters
- if i % 5 != 0:
- plt.yticks([])
- print('%15s: %.3g' % (k, mu))
- plt.savefig('evolve.png', dpi=200)
- print('\nPlot saved as evolve.png')
-
-
-def profile_idetection(start=0, stop=0, labels=(), save_dir=''):
- # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection()
- ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel()
- s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS']
- files = list(Path(save_dir).glob('frames*.txt'))
- for fi, f in enumerate(files):
- try:
- results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows
- n = results.shape[1] # number of rows
- x = np.arange(start, min(stop, n) if stop else n)
- results = results[:, x]
- t = (results[0] - results[0].min()) # set t0=0s
- results[0] = x
- for i, a in enumerate(ax):
- if i < len(results):
- label = labels[fi] if len(labels) else f.stem.replace('frames_', '')
- a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5)
- a.set_title(s[i])
- a.set_xlabel('time (s)')
- # if fi == len(files) - 1:
- # a.set_ylim(bottom=0)
- for side in ['top', 'right']:
- a.spines[side].set_visible(False)
- else:
- a.remove()
- except Exception as e:
- print('Warning: Plotting error for %s; %s' % (f, e))
-
- ax[1].legend()
- plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200)
-
-
-def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay()
- # Plot training 'results*.txt', overlaying train and val losses
- s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends
- t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles
- for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')):
- results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T
- n = results.shape[1] # number of rows
- x = range(start, min(stop, n) if stop else n)
- fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True)
- ax = ax.ravel()
- for i in range(5):
- for j in [i, i + 5]:
- y = results[j, x]
- ax[i].plot(x, y, marker='.', label=s[j])
- # y_smooth = butter_lowpass_filtfilt(y)
- # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j])
-
- ax[i].set_title(t[i])
- ax[i].legend()
-            if i == 0:
-                ax[i].set_ylabel(f)  # add filename
- fig.savefig(f.replace('.txt', '.png'), dpi=200)
-
-
-def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''):
- # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp')
- fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True)
- ax = ax.ravel()
- s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall',
- 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95']
- if bucket:
- # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id]
- files = ['results%g.txt' % x for x in id]
- c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id)
- os.system(c)
- else:
- files = list(Path(save_dir).glob('results*.txt'))
- assert len(files), 'No results.txt files found in %s, nothing to plot.' % os.path.abspath(save_dir)
- for fi, f in enumerate(files):
- try:
- results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T
- n = results.shape[1] # number of rows
- x = range(start, min(stop, n) if stop else n)
- for i in range(10):
- y = results[i, x]
- if i in [0, 1, 2, 5, 6, 7]:
- y[y == 0] = np.nan # don't show zero loss values
- # y /= y[0] # normalize
- label = labels[fi] if len(labels) else f.stem
- ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8)
- ax[i].set_title(s[i])
- # if i in [5, 6, 7]: # share train and val loss y axes
- # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
- except Exception as e:
- print('Warning: Plotting error for %s; %s' % (f, e))
-
- ax[1].legend()
- fig.savefig(Path(save_dir) / 'results.png', dpi=200)
-
-
-def output_to_keypoint(output):
-    # Convert model output to target format [batch_id, class_id, x, y, w, h, conf, kpt_x, kpt_y, kpt_conf, ...]
- targets = []
- for i, o in enumerate(output):
- kpts = o[:,6:]
- o = o[:,:6]
- for index, (*box, conf, cls) in enumerate(o.detach().cpu().numpy()):
- targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf, *list(kpts.detach().cpu().numpy()[index])])
- return np.array(targets)
-
-
-def plot_skeleton_kpts(im, kpts, steps, orig_shape=None):
-    # Plot the skeleton and keypoints for the COCO dataset
- palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102],
- [230, 230, 0], [255, 153, 255], [153, 204, 255],
- [255, 102, 255], [255, 51, 255], [102, 178, 255],
- [51, 153, 255], [255, 153, 153], [255, 102, 102],
- [255, 51, 51], [153, 255, 153], [102, 255, 102],
- [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0],
- [255, 255, 255]])
-
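-    # COCO 17-keypoint skeleton as 1-based index pairs (converted to 0-based when drawing below)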
- skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12],
- [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3],
- [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]
-
- pose_limb_color = palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]]
- pose_kpt_color = palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]]
- radius = 5
- num_kpts = len(kpts) // steps
-
- for kid in range(num_kpts):
- r, g, b = pose_kpt_color[kid]
- x_coord, y_coord = kpts[steps * kid], kpts[steps * kid + 1]
- if not (x_coord % 640 == 0 or y_coord % 640 == 0):
- if steps == 3:
- conf = kpts[steps * kid + 2]
- if conf < 0.5:
- continue
- cv2.circle(im, (int(x_coord), int(y_coord)), radius, (int(r), int(g), int(b)), -1)
-
- for sk_id, sk in enumerate(skeleton):
- r, g, b = pose_limb_color[sk_id]
- pos1 = (int(kpts[(sk[0]-1)*steps]), int(kpts[(sk[0]-1)*steps+1]))
- pos2 = (int(kpts[(sk[1]-1)*steps]), int(kpts[(sk[1]-1)*steps+1]))
- if steps == 3:
- conf1 = kpts[(sk[0]-1)*steps+2]
- conf2 = kpts[(sk[1]-1)*steps+2]
- if conf1<0.5 or conf2<0.5:
- continue
- if pos1[0]%640 == 0 or pos1[1]%640==0 or pos1[0]<0 or pos1[1]<0:
- continue
- if pos2[0] % 640 == 0 or pos2[1] % 640 == 0 or pos2[0]<0 or pos2[1]<0:
- continue
- cv2.line(im, pos1, pos2, (int(r), int(g), int(b)), thickness=2)
diff --git a/cv/detection/yolov7/pytorch/utils/torch_utils.py b/cv/detection/yolov7/pytorch/utils/torch_utils.py
deleted file mode 100644
index 1e631b555508457a4944c11a479176463719c0e8..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/torch_utils.py
+++ /dev/null
@@ -1,374 +0,0 @@
-# YOLOR PyTorch utils
-
-import datetime
-import logging
-import math
-import os
-import platform
-import subprocess
-import time
-from contextlib import contextmanager
-from copy import deepcopy
-from pathlib import Path
-
-import torch
-import torch.backends.cudnn as cudnn
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-
-try:
- import thop # for FLOPS computation
-except ImportError:
- thop = None
-logger = logging.getLogger(__name__)
-
-
-@contextmanager
-def torch_distributed_zero_first(local_rank: int):
- """
- Decorator to make all processes in distributed training wait for each local_master to do something.
- """
- if local_rank not in [-1, 0]:
- torch.distributed.barrier()
- yield
- if local_rank == 0:
- torch.distributed.barrier()
-
-
-def init_torch_seeds(seed=0):
- # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
- torch.manual_seed(seed)
- if seed == 0: # slower, more reproducible
- cudnn.benchmark, cudnn.deterministic = False, True
- else: # faster, less reproducible
- cudnn.benchmark, cudnn.deterministic = True, False
-
-
-def date_modified(path=__file__):
- # return human-readable file modification date, i.e. '2021-3-26'
- t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime)
- return f'{t.year}-{t.month}-{t.day}'
-
-
-def git_describe(path=Path(__file__).parent): # path must be a directory
- # return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe
- s = f'git -C {path} describe --tags --long --always'
- try:
- return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1]
- except subprocess.CalledProcessError as e:
- return '' # not a git repository
-
-
-def select_device(device='', batch_size=None):
- # device = 'cpu' or '0' or '0,1,2,3'
- s = f'YOLOR 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string
- cpu = device.lower() == 'cpu'
- if cpu:
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False
- elif device: # non-cpu device requested
- os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
- assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability
-
- cuda = not cpu and torch.cuda.is_available()
- if cuda:
- n = torch.cuda.device_count()
- if n > 1 and batch_size: # check that batch_size is compatible with device_count
- assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}'
- space = ' ' * len(s)
- for i, d in enumerate(device.split(',') if device else range(n)):
- p = torch.cuda.get_device_properties(i)
- s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB
- else:
- s += 'CPU\n'
-
- logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe
- return torch.device('cuda:0' if cuda else 'cpu')
-
-
-def time_synchronized():
- # pytorch-accurate time
- if torch.cuda.is_available():
- torch.cuda.synchronize()
- return time.time()
-
-
-def profile(x, ops, n=100, device=None):
- # profile a pytorch module or list of modules. Example usage:
- # x = torch.randn(16, 3, 640, 640) # input
- # m1 = lambda x: x * torch.sigmoid(x)
- # m2 = nn.SiLU()
- # profile(x, [m1, m2], n=100) # profile speed over 100 iterations
-
- device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
- x = x.to(device)
- x.requires_grad = True
- print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '')
- print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}")
- for m in ops if isinstance(ops, list) else [ops]:
- m = m.to(device) if hasattr(m, 'to') else m # device
- m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type
- dtf, dtb, t = 0., 0., [0., 0., 0.] # dt forward, backward
- try:
- flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS
-        except Exception:  # thop missing or unsupported op
- flops = 0
-
- for _ in range(n):
- t[0] = time_synchronized()
- y = m(x)
- t[1] = time_synchronized()
- try:
- _ = y.sum().backward()
- t[2] = time_synchronized()
-            except Exception: # no backward method
- t[2] = float('nan')
- dtf += (t[1] - t[0]) * 1000 / n # ms per op forward
- dtb += (t[2] - t[1]) * 1000 / n # ms per op backward
-
- s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list'
- s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list'
- p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters
- print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}')
-
-
-def is_parallel(model):
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
-
-
-def intersect_dicts(da, db, exclude=()):
- # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values
- return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape}
-
-
-def initialize_weights(model):
- for m in model.modules():
- t = type(m)
- if t is nn.Conv2d:
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif t is nn.BatchNorm2d:
- m.eps = 1e-3
- m.momentum = 0.03
- elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
- m.inplace = True
-
-
-def find_modules(model, mclass=nn.Conv2d):
- # Finds layer indices matching module class 'mclass'
- return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
-
-
-def sparsity(model):
- # Return global model sparsity
- a, b = 0., 0.
- for p in model.parameters():
- a += p.numel()
- b += (p == 0).sum()
- return b / a
-
-
-def prune(model, amount=0.3):
- # Prune model to requested global sparsity
- import torch.nn.utils.prune as prune
- print('Pruning model... ', end='')
- for name, m in model.named_modules():
- if isinstance(m, nn.Conv2d):
- prune.l1_unstructured(m, name='weight', amount=amount) # prune
- prune.remove(m, 'weight') # make permanent
- print(' %.3g global sparsity' % sparsity(model))
-
-
-def fuse_conv_and_bn(conv, bn):
- # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/
- fusedconv = nn.Conv2d(conv.in_channels,
- conv.out_channels,
- kernel_size=conv.kernel_size,
- stride=conv.stride,
- padding=conv.padding,
- groups=conv.groups,
- bias=True).requires_grad_(False).to(conv.weight.device)
-
- # prepare filters
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))
-
- # prepare spatial bias
- b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
-
- return fusedconv
-
-
-def model_info(model, verbose=False, img_size=640):
- # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320]
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
- n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
- if verbose:
- print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
- for i, (name, p) in enumerate(model.named_parameters()):
- name = name.replace('module_list.', '')
- print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
- (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
-
- try: # FLOPS
- from thop import profile
- stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
- img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input
- flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS
- img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float
- fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS
-    except Exception:  # thop missing or profiling failed
- fs = ''
-
- logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}")
-
-
-def load_classifier(name='resnet101', n=2):
- # Loads a pretrained model reshaped to n-class output
- model = torchvision.models.__dict__[name](pretrained=True)
-
- # ResNet model properties
- # input_size = [3, 224, 224]
- # input_space = 'RGB'
- # input_range = [0, 1]
- # mean = [0.485, 0.456, 0.406]
- # std = [0.229, 0.224, 0.225]
-
- # Reshape output to n classes
- filters = model.fc.weight.shape[1]
- model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True)
- model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True)
- model.fc.out_features = n
- return model
-
-
-def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416)
- # scales img(bs,3,y,x) by ratio constrained to gs-multiple
- if ratio == 1.0:
- return img
- else:
- h, w = img.shape[2:]
- s = (int(h * ratio), int(w * ratio)) # new size
- img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
- if not same_shape: # pad/crop img
- h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
- return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
-
-
-def copy_attr(a, b, include=(), exclude=()):
- # Copy attributes from b to a, options to only include [...] and to exclude [...]
- for k, v in b.__dict__.items():
- if (len(include) and k not in include) or k.startswith('_') or k in exclude:
- continue
- else:
- setattr(a, k, v)
-
-
-class ModelEMA:
- """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
- Keep a moving average of everything in the model state_dict (parameters and buffers).
- This is intended to allow functionality like
- https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
- A smoothed version of the weights is necessary for some training schemes to perform well.
-    This class is sensitive to where it is initialized in the sequence of model init,
-    GPU assignment and distributed training wrappers.
- """
-
- def __init__(self, model, decay=0.9999, updates=0):
- # Create EMA
- self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA
- # if next(model.parameters()).device.type != 'cpu':
- # self.ema.half() # FP16 EMA
- self.updates = updates # number of EMA updates
- self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
- for p in self.ema.parameters():
- p.requires_grad_(False)
-
- def update(self, model):
- # Update EMA parameters
- with torch.no_grad():
- self.updates += 1
- d = self.decay(self.updates)
-
- msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
- for k, v in self.ema.state_dict().items():
- if v.dtype.is_floating_point:
- v *= d
- v += (1. - d) * msd[k].detach()
-
- def update_attr(self, model, include=(), exclude=('process_group', 'reducer')):
- # Update EMA attributes
- copy_attr(self.ema, model, include, exclude)
-
-
-class BatchNormXd(torch.nn.modules.batchnorm._BatchNorm):
- def _check_input_dim(self, input):
- # The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc
- # is this method that is overwritten by the sub-class
-        # The original goal of this method was tensor sanity checks.
-        # If you're OK bypassing those sanity checks (e.g. if you trust your inference
-        # to provide the right dimensional inputs), then you can just use this method
- # for easy conversion from SyncBatchNorm
- # (unfortunately, SyncBatchNorm does not store the original class - if it did
- # we could return the one that was originally created)
- return
-
-def revert_sync_batchnorm(module):
- # this is very similar to the function that it is trying to revert:
- # https://github.com/pytorch/pytorch/blob/c8b3686a3e4ba63dc59e5dcfe5db3430df256833/torch/nn/modules/batchnorm.py#L679
- module_output = module
- if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm):
- new_cls = BatchNormXd
- module_output = BatchNormXd(module.num_features,
- module.eps, module.momentum,
- module.affine,
- module.track_running_stats)
- if module.affine:
- with torch.no_grad():
- module_output.weight = module.weight
- module_output.bias = module.bias
- module_output.running_mean = module.running_mean
- module_output.running_var = module.running_var
- module_output.num_batches_tracked = module.num_batches_tracked
- if hasattr(module, "qconfig"):
- module_output.qconfig = module.qconfig
- for name, child in module.named_children():
- module_output.add_module(name, revert_sync_batchnorm(child))
- del module
- return module_output
-
-
-class TracedModel(nn.Module):
-
- def __init__(self, model=None, device=None, img_size=(640,640)):
- super(TracedModel, self).__init__()
-
- print(" Convert model to Traced-model... ")
- self.stride = model.stride
- self.names = model.names
- self.model = model
-
- self.model = revert_sync_batchnorm(self.model)
- self.model.to('cpu')
- self.model.eval()
-
- self.detect_layer = self.model.model[-1]
- self.model.traced = True
-
-        rand_example = torch.rand(1, 3, img_size, img_size)  # note: expects an int img_size; the (640, 640) tuple default would fail here
-
- traced_script_module = torch.jit.trace(self.model, rand_example, strict=False)
- #traced_script_module = torch.jit.script(self.model)
- traced_script_module.save("traced_model.pt")
- print(" traced_script_module saved! ")
- self.model = traced_script_module
- self.model.to(device)
- self.detect_layer.to(device)
- print(" model is traced! \n")
-
- def forward(self, x, augment=False, profile=False):
- out = self.model(x)
- out = self.detect_layer(out)
- return out
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/utils/wandb_logging/__init__.py b/cv/detection/yolov7/pytorch/utils/wandb_logging/__init__.py
deleted file mode 100644
index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/wandb_logging/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# init
\ No newline at end of file
diff --git a/cv/detection/yolov7/pytorch/utils/wandb_logging/log_dataset.py b/cv/detection/yolov7/pytorch/utils/wandb_logging/log_dataset.py
deleted file mode 100644
index 74cd6c6cd3b182572a6e5bec68de02a9bd0d552d..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/wandb_logging/log_dataset.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import argparse
-
-import yaml
-
-from wandb_utils import WandbLogger
-
-WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
-
-
-def create_dataset_artifact(opt):
- with open(opt.data) as f:
- data = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- logger = WandbLogger(opt, '', None, data, job_type='Dataset Creation')
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
- parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
- parser.add_argument('--project', type=str, default='YOLOR', help='name of W&B Project')
- opt = parser.parse_args()
- opt.resume = False # Explicitly disallow resume check for dataset upload job
-
- create_dataset_artifact(opt)
diff --git a/cv/detection/yolov7/pytorch/utils/wandb_logging/wandb_utils.py b/cv/detection/yolov7/pytorch/utils/wandb_logging/wandb_utils.py
deleted file mode 100644
index aec7c5f486f962b7b59198f40a1edb5a79824afe..0000000000000000000000000000000000000000
--- a/cv/detection/yolov7/pytorch/utils/wandb_logging/wandb_utils.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import json
-import sys
-from pathlib import Path
-
-import torch
-import yaml
-from tqdm import tqdm
-
-sys.path.append(str(Path(__file__).parent.parent.parent)) # add utils/ to path
-from utils.datasets import LoadImagesAndLabels
-from utils.datasets import img2label_paths
-from utils.general import colorstr, xywh2xyxy, check_dataset
-
-try:
- import wandb
- from wandb import init, finish
-except ImportError:
- wandb = None
-
-WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
-
-
-def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX):
- return from_string[len(prefix):]
-
-
-def check_wandb_config_file(data_config_file):
- wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path
- if Path(wandb_config).is_file():
- return wandb_config
- return data_config_file
-
-
-def get_run_info(run_path):
- run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX))
- run_id = run_path.stem
- project = run_path.parent.stem
- model_artifact_name = 'run_' + run_id + '_model'
- return run_id, project, model_artifact_name
-
-
-def check_wandb_resume(opt):
-    if opt.global_rank not in [-1, 0]:
-        process_wandb_config_ddp_mode(opt)
- if isinstance(opt.resume, str):
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- if opt.global_rank not in [-1, 0]: # For resuming DDP runs
- run_id, project, model_artifact_name = get_run_info(opt.resume)
- api = wandb.Api()
- artifact = api.artifact(project + '/' + model_artifact_name + ':latest')
- modeldir = artifact.download()
- opt.weights = str(Path(modeldir) / "last.pt")
- return True
- return None
-
-
-def process_wandb_config_ddp_mode(opt):
- with open(opt.data) as f:
- data_dict = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- train_dir, val_dir = None, None
- if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias)
- train_dir = train_artifact.download()
- train_path = Path(train_dir) / 'data/images/'
- data_dict['train'] = str(train_path)
-
- if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX):
- api = wandb.Api()
- val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias)
- val_dir = val_artifact.download()
- val_path = Path(val_dir) / 'data/images/'
- data_dict['val'] = str(val_path)
- if train_dir or val_dir:
-        ddp_data_path = str(Path(val_dir or train_dir) / 'wandb_local_data.yaml')  # val_dir may be None if only train is an artifact
- with open(ddp_data_path, 'w') as f:
- yaml.dump(data_dict, f)
- opt.data = ddp_data_path
-
-
-class WandbLogger():
- def __init__(self, opt, name, run_id, data_dict, job_type='Training'):
- # Pre-training routine --
- self.job_type = job_type
-        self.wandb, self.wandb_run, self.data_dict = wandb, wandb.run if wandb else None, data_dict
-        # A single wandb.init call would be cleaner, but useful config data would be overwritten by WandbLogger's own wandb.init call
- if isinstance(opt.resume, str): # checks resume from artifact
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- run_id, project, model_artifact_name = get_run_info(opt.resume)
- model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name
- assert wandb, 'install wandb to resume wandb runs'
-                # Resume wandb-artifact:// runs here (workaround so wandb.config is not overwritten)
- self.wandb_run = wandb.init(id=run_id, project=project, resume='allow')
- opt.resume = model_artifact_name
- elif self.wandb:
- self.wandb_run = wandb.init(config=opt,
- resume="allow",
- project='YOLOR' if opt.project == 'runs/train' else Path(opt.project).stem,
- name=name,
- job_type=job_type,
- id=run_id) if not wandb.run else wandb.run
- if self.wandb_run:
- if self.job_type == 'Training':
- if not opt.resume:
- wandb_data_dict = self.check_and_upload_dataset(opt) if opt.upload_dataset else data_dict
- # Info useful for resuming from artifacts
- self.wandb_run.config.opt = vars(opt)
- self.wandb_run.config.data_dict = wandb_data_dict
- self.data_dict = self.setup_training(opt, data_dict)
- if self.job_type == 'Dataset Creation':
- self.data_dict = self.check_and_upload_dataset(opt)
- else:
- prefix = colorstr('wandb: ')
- print(f"{prefix}Install Weights & Biases for YOLOR logging with 'pip install wandb' (recommended)")
-
- def check_and_upload_dataset(self, opt):
- assert wandb, 'Install wandb to upload dataset'
- check_dataset(self.data_dict)
- config_path = self.log_dataset_artifact(opt.data,
- opt.single_cls,
- 'YOLOR' if opt.project == 'runs/train' else Path(opt.project).stem)
- print("Created dataset config file ", config_path)
- with open(config_path) as f:
- wandb_data_dict = yaml.load(f, Loader=yaml.SafeLoader)
- return wandb_data_dict
-
- def setup_training(self, opt, data_dict):
- self.log_dict, self.current_epoch, self.log_imgs = {}, 0, 16 # Logging Constants
- self.bbox_interval = opt.bbox_interval
- if isinstance(opt.resume, str):
- modeldir, _ = self.download_model_artifact(opt)
- if modeldir:
- self.weights = Path(modeldir) / "last.pt"
- config = self.wandb_run.config
-                opt.weights = str(self.weights)
-                opt.save_period = config.save_period
-                opt.batch_size = config.total_batch_size
-                opt.bbox_interval = config.bbox_interval
-                opt.epochs = config.epochs
-                opt.hyp = config.opt['hyp']
- data_dict = dict(self.wandb_run.config.data_dict) # eliminates the need for config file to resume
- if 'val_artifact' not in self.__dict__: # If --upload_dataset is set, use the existing artifact, don't download
- self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(data_dict.get('train'),
- opt.artifact_alias)
- self.val_artifact_path, self.val_artifact = self.download_dataset_artifact(data_dict.get('val'),
- opt.artifact_alias)
- self.result_artifact, self.result_table, self.val_table, self.weights = None, None, None, None
- if self.train_artifact_path is not None:
- train_path = Path(self.train_artifact_path) / 'data/images/'
- data_dict['train'] = str(train_path)
- if self.val_artifact_path is not None:
- val_path = Path(self.val_artifact_path) / 'data/images/'
- data_dict['val'] = str(val_path)
- self.val_table = self.val_artifact.get("val")
- self.map_val_table_path()
- if self.val_artifact is not None:
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
- self.result_table = wandb.Table(["epoch", "id", "prediction", "avg_confidence"])
- if opt.bbox_interval == -1:
- self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1
- return data_dict
-
- def download_dataset_artifact(self, path, alias):
- if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX):
- dataset_artifact = wandb.use_artifact(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias)
-            assert dataset_artifact is not None, "Error: W&B dataset artifact doesn't exist"
- datadir = dataset_artifact.download()
- return datadir, dataset_artifact
- return None, None
-
- def download_model_artifact(self, opt):
- if opt.resume.startswith(WANDB_ARTIFACT_PREFIX):
- model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest")
- assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist'
- modeldir = model_artifact.download()
- epochs_trained = model_artifact.metadata.get('epochs_trained')
- total_epochs = model_artifact.metadata.get('total_epochs')
- assert epochs_trained < total_epochs, 'training to %g epochs is finished, nothing to resume.' % (
- total_epochs)
- return modeldir, model_artifact
- return None, None
-
- def log_model(self, path, opt, epoch, fitness_score, best_model=False):
- model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', type='model', metadata={
- 'original_url': str(path),
- 'epochs_trained': epoch + 1,
-            'save_period': opt.save_period,
- 'project': opt.project,
- 'total_epochs': opt.epochs,
- 'fitness_score': fitness_score
- })
- model_artifact.add_file(str(path / 'last.pt'), name='last.pt')
- wandb.log_artifact(model_artifact,
- aliases=['latest', 'epoch ' + str(self.current_epoch), 'best' if best_model else ''])
- print("Saving model artifact on epoch ", epoch + 1)
-
- def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False):
- with open(data_file) as f:
- data = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names'])
-        names = dict(enumerate(names))  # class index -> class name mapping
- self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(
- data['train']), names, name='train') if data.get('train') else None
- self.val_artifact = self.create_dataset_table(LoadImagesAndLabels(
- data['val']), names, name='val') if data.get('val') else None
- if data.get('train'):
- data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train')
- if data.get('val'):
- data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val')
- path = data_file if overwrite_config else '_wandb.'.join(data_file.rsplit('.', 1)) # updated data.yaml path
- data.pop('download', None)
- with open(path, 'w') as f:
- yaml.dump(data, f)
-
- if self.job_type == 'Training': # builds correct artifact pipeline graph
- self.wandb_run.use_artifact(self.val_artifact)
- self.wandb_run.use_artifact(self.train_artifact)
- self.val_artifact.wait()
- self.val_table = self.val_artifact.get('val')
- self.map_val_table_path()
- else:
- self.wandb_run.log_artifact(self.train_artifact)
- self.wandb_run.log_artifact(self.val_artifact)
- return path
-
- def map_val_table_path(self):
- self.val_table_map = {}
- print("Mapping dataset")
- for i, data in enumerate(tqdm(self.val_table.data)):
- self.val_table_map[data[3]] = data[0]
-
- def create_dataset_table(self, dataset, class_to_id, name='dataset'):
-        # TODO: explore multiprocessing to split this loop and run it in parallel; essential for speeding up logging
- artifact = wandb.Artifact(name=name, type="dataset")
-        if isinstance(dataset.path, str) and Path(dataset.path).is_dir():
-            img_files = tqdm([dataset.path])
-        else:
-            img_files = tqdm(dataset.img_files)
- for img_file in img_files:
- if Path(img_file).is_dir():
- artifact.add_dir(img_file, name='data/images')
- labels_path = 'labels'.join(dataset.path.rsplit('images', 1))
- artifact.add_dir(labels_path, name='data/labels')
- else:
- artifact.add_file(img_file, name='data/images/' + Path(img_file).name)
- label_file = Path(img2label_paths([img_file])[0])
-                if label_file.exists():
-                    artifact.add_file(str(label_file), name='data/labels/' + label_file.name)
- table = wandb.Table(columns=["id", "train_image", "Classes", "name"])
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()])
- for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)):
- height, width = shapes[0]
- labels[:, 2:] = (xywh2xyxy(labels[:, 2:].view(-1, 4))) * torch.Tensor([width, height, width, height])
- box_data, img_classes = [], {}
- for cls, *xyxy in labels[:, 1:].tolist():
- cls = int(cls)
- box_data.append({"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
- "class_id": cls,
- "box_caption": "%s" % (class_to_id[cls]),
- "scores": {"acc": 1},
- "domain": "pixel"})
- img_classes[cls] = class_to_id[cls]
- boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space
- table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), json.dumps(img_classes),
- Path(paths).name)
- artifact.add(table, name)
- return artifact
-
- def log_training_progress(self, predn, path, names):
- if self.val_table and self.result_table:
- class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()])
- box_data = []
- total_conf = 0
- for *xyxy, conf, cls in predn.tolist():
- if conf >= 0.25:
- box_data.append(
- {"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
- "class_id": int(cls),
- "box_caption": "%s %.3f" % (names[cls], conf),
- "scores": {"class_score": conf},
- "domain": "pixel"})
- total_conf = total_conf + conf
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
-            img_id = self.val_table_map[Path(path).name]  # avoid shadowing the builtin id()
-            self.result_table.add_data(self.current_epoch,
-                                       img_id,
- wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set),
- total_conf / max(1, len(box_data))
- )
-
- def log(self, log_dict):
- if self.wandb_run:
- for key, value in log_dict.items():
- self.log_dict[key] = value
-
- def end_epoch(self, best_result=False):
- if self.wandb_run:
- wandb.log(self.log_dict)
- self.log_dict = {}
- if self.result_artifact:
- train_results = wandb.JoinedTable(self.val_table, self.result_table, "id")
- self.result_artifact.add(train_results, 'result')
- wandb.log_artifact(self.result_artifact, aliases=['latest', 'epoch ' + str(self.current_epoch),
- ('best' if best_result else '')])
- self.result_table = wandb.Table(["epoch", "id", "prediction", "avg_confidence"])
- self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation")
-
- def finish_run(self):
- if self.wandb_run:
- if self.log_dict:
- wandb.log(self.log_dict)
- wandb.run.finish()
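
For readers tracing what this removal drops: `WandbLogger` buffers scalar metrics in `log_dict`, flushes them once per epoch via `end_epoch()`, and optionally materializes dataset and evaluation artifacts. Below is a minimal sketch of the lifecycle the class expects from a training loop, assuming `wandb` is installed and logged in; the attribute names are taken from the code above, while the values, paths, and loss numbers are placeholders:

```python
# Hedged sketch of the WandbLogger training-loop contract (placeholder values).
from argparse import Namespace

from utils.wandb_logging.wandb_utils import WandbLogger  # the deleted module

opt = Namespace(
    resume=False,           # fresh run: skip the wandb-artifact:// resume path
    upload_dataset=False,   # do not push the dataset as an artifact
    project='runs/train',   # mapped to the W&B project 'YOLOR' in __init__
    bbox_interval=-1,       # setup_training() rewrites this from epochs
    epochs=3,
    artifact_alias='latest',
)
data_dict = {'train': 'data/coco/train2017.txt',  # placeholder dataset paths
             'val': 'data/coco/val2017.txt'}

logger = WandbLogger(opt, 'exp', None, data_dict)  # job_type defaults to 'Training'
for epoch in range(opt.epochs):
    # ... forward/backward pass would go here ...
    logger.log({'train/loss': 0.0})      # buffered into log_dict, not sent yet
    logger.current_epoch = epoch + 1
    logger.end_epoch(best_result=False)  # single wandb.log() flush per epoch
logger.finish_run()                      # final flush, then wandb.run.finish()
```

The buffering design means everything logged during an epoch lands in one `wandb.log` call, keeping the W&B step counter aligned with epochs rather than with individual metric writes.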