Maximizing Pig Detection Accuracy with YOLO: A Case Study in Precision and Efficiency

In this in-depth article, we delve into the journey of developing a highly accurate pig detection system using YOLOv8 (and briefly touching on YOLOv9), tailored for a controlled environment with a single pig per enclosure. We explore the significance of dataset curation, annotation choices, image resolution, and augmentation strategies for achieving exceptional performance. Additionally, we discuss hardware considerations for efficient training and deployment.

The Challenge of Pig Detection

Pigs play a vital role in research, and monitoring their behavior is crucial for animal welfare and operational efficiency. However, accurate pig detection in real-world environments presents several challenges:

  • Partial Occlusion: Pigs might be partially hidden by objects or other pigs, making them harder to identify.
  • Lighting Variations: Even in a controlled environment, lighting conditions can change throughout the day, impacting the model’s ability to detect pigs.
  • Data Scarcity: High-quality, annotated datasets of pig images are often not readily available.
  • Computation Constraints: Real-time or near-real-time detection often requires efficient models that run well on available hardware.

Dataset Collection and Annotation

The cornerstone of our project was building a meticulously curated dataset of pig images. We captured a large number of images under varying lighting conditions within the target deployment environment.

  • Polygon Annotations: Instead of simple bounding boxes, we employed polygon annotations to precisely outline the contours of the pigs (see the conversion sketch after this list). This decision proved instrumental in handling partial occlusions and training the model to recognize the true shape of the pig.
  • Addressing Occlusions: We included numerous images with partial occlusions, teaching the model to identify pigs even when partly obscured.
  • Dataset Size: After eliminating duplicate images and those with annotation errors, we were left with approximately 5,000 high-quality, annotated pig images.
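
To make the annotation format concrete: Ultralytics YOLOv8 expects segmentation labels as a class id followed by normalized polygon coordinates, one object per line. Below is a minimal sketch of that conversion; the class id, image size, and polygon points are illustrative placeholders, not values from our dataset.

```python
# Minimal sketch: convert a pixel-space polygon into a YOLO-seg label line.
# One class is assumed ("pig" -> class id 0); coordinates are normalized
# to [0, 1] by the image width and height.

def polygon_to_yolo_seg(points, img_w, img_h, class_id=0):
    """points: list of (x, y) pixel coordinates outlining the pig."""
    coords = []
    for x, y in points:
        coords.append(f"{x / img_w:.6f}")
        coords.append(f"{y / img_h:.6f}")
    return f"{class_id} " + " ".join(coords)

# Illustrative polygon around a pig in a 640x480 frame.
polygon = [(120, 200), (260, 180), (340, 240), (300, 360), (150, 340)]
print(polygon_to_yolo_seg(polygon, img_w=640, img_h=480))
```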

Model Training and YOLOv8 Selection

We experimented with different YOLO architectures (v8 and v9) and training strategies, leading to several key insights:

  • YOLOv8: We settled on the YOLOv8 architecture for its performance and ease of use within the Ultralytics library.
  • Training from Scratch: Unexpectedly, training from scratch with random weights consistently outperformed models initialized with pre-trained weights, highlighting the importance of specializing the model to our unique pig images and environment (a training sketch follows this list).
  • Image Resolution: After testing various image sizes, we found that 384×384 was a sweet spot, balancing detail preservation with computational efficiency.
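
Put together, the training setup looks roughly like the sketch below, using the Ultralytics API. The dataset config name, model variant, epochs, and batch size are illustrative assumptions, not our exact settings.

```python
from ultralytics import YOLO

# Loading the architecture from its .yaml config (rather than a .pt
# checkpoint) initializes the network with random weights, i.e.
# training from scratch.
model = YOLO("yolov8n-seg.yaml")

model.train(
    data="pig-seg.yaml",   # hypothetical dataset definition file
    imgsz=384,             # the 384x384 sweet spot discussed above
    epochs=300,            # illustrative; tune for your dataset
    batch=16,              # illustrative; depends on available GPU memory
    pretrained=False,      # keep the random initialization
)
```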

Data Augmentation

Since we had a controlled environment, we focused on targeted augmentations that mirrored real-world variations:

  • Brightness Adjustments: To accommodate subtle changes in lighting.
  • Geometric Annotation Augmentations: Subtle shifts, scaling, and rotations were applied to the images together with their polygon annotations. This helped the model become more robust to minor annotation imperfections and pig pose variations (a hyperparameter sketch follows this list).
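
With Ultralytics, such targeted augmentations can be expressed directly as training hyperparameters; geometric transforms are applied consistently to both the images and their polygon labels. The values below are a hedged sketch, not the exact numbers from our runs.

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.yaml")
model.train(
    data="pig-seg.yaml",  # hypothetical dataset definition file
    imgsz=384,
    hsv_v=0.3,            # brightness (value) variation
    degrees=5.0,          # small rotations
    translate=0.05,       # subtle shifts
    scale=0.1,            # mild scaling
    mosaic=0.0,           # disable augmentations that don't mirror our enclosure
    fliplr=0.0,
)
```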

Achieving Remarkable Results

After fixing annotation errors and refining model training at the 384×384 resolution with our targeted augmentation strategy, we achieved:

  • mAP (Mean Average Precision): 99.3%
  • Precision: 99.4%
  • Recall: 99.2%

These metrics indicate exceptional accuracy, with very few false positives and missed detections, which was a critical goal for our application.
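
For readers reproducing this kind of evaluation, the sketch below shows how such metrics can be read off an Ultralytics validation run. The checkpoint path and dataset config are hypothetical, and note that Ultralytics reports mAP at specific IoU thresholds (e.g. mAP50).

```python
from ultralytics import YOLO

# Hypothetical path to the best checkpoint from a training run.
model = YOLO("runs/segment/train/weights/best.pt")
metrics = model.val(data="pig-seg.yaml", imgsz=384)

print(f"mAP50:     {metrics.box.map50:.3f}")  # mean average precision @ IoU 0.5
print(f"Precision: {metrics.box.mp:.3f}")     # mean precision over classes
print(f"Recall:    {metrics.box.mr:.3f}")     # mean recall over classes
```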

Conclusion

At PicoTeam, our commitment to precision and efficiency in lab animal behavior analysis is unwavering. By leveraging advanced technologies like YOLO and meticulous data curation, we are transforming research practices and enhancing the accuracy of behavioral monitoring. Our solutions are designed to meet the unique needs of researchers, ensuring reliable and reproducible results in animal behavior studies.
