Predator Classifier first public release

Hi everyone,
I’ve just released on GitHub my take on using object detection to speed up the review of batches of trail camera images: the Predator Classifier app. I have another project in mind (currently in development), and along the way I wanted to speed up the creation of an accurate detection model for classifying predators seen in trail camera images. So I have developed a tool that does an initial scan and files images (as copies) into folders corresponding to any animal detected. The user can then review these images and make final corrections, ending up with a ‘correct’ filing of images and a CSV summary (original filename, datetime of image capture if found, initial detection, corrected detection). This gives good data showing any weaknesses in the detection model, for retraining new versions.
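For anyone curious what that filing-plus-CSV workflow might look like in code, here is a minimal sketch. This is not the app’s actual implementation; `file_detections` and its inputs are hypothetical names for illustration.

```python
import csv
import shutil
from pathlib import Path

def file_detections(detections, source_dir, review_dir, csv_path):
    """Copy each image into a folder named after its detected class and
    write one summary CSV row per image. `detections` maps filename ->
    (class_label, capture_datetime or None)."""
    source_dir, review_dir = Path(source_dir), Path(review_dir)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["original_filename", "capture_datetime",
                         "initial_detection", "corrected_detection"])
        for name, (label, captured) in detections.items():
            dest = review_dir / label
            dest.mkdir(parents=True, exist_ok=True)
            shutil.copy2(source_dir / name, dest / name)  # copy, never move
            writer.writerow([name,
                             captured.isoformat() if captured else "",
                             label,
                             ""])  # corrected_detection filled in after review
```

The key design point is that originals are never moved, so a bad detection pass can simply be re-run.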

Please feel free to download it (Releases v1.0.0 - Initial public release) and have a play. The layout is still a bit rough, and I’ll probably have to make it more responsive to smaller screen sizes.

When running this on my PC (3 GHz processor, 32 GB RAM), it scans approximately 1,000 images in about 8-10 minutes. After this initial detection, it takes me about 15-20 minutes to manually scan through the images again and make any corrections.

I’d be happy for some feedback. Also, if anyone is willing to share trail camera images of rare classes (kiwi, ferret), I’d be delighted to include these in future detection models for sharing.

The initial detection model is built on approximately 7,000 images from about 60 trail cameras, so there is reasonable diversity of lighting, distance to bait station, etc. Your own trail camera images may not be detected quite as well (depending on environment), but in the near future I will be releasing some instructions on how you could build up your own detection model specific to your trapping project and still plug it into the tool I am sharing now.

Cheers,
Hamish


Hi Hamish, well done. It’s good to see someone working on something that can make a real difference. I have forwarded this to a few people.


I think I have a few ferret images. How could I send them to you?

Hi @davo36,
Thanks for the offer! This would be a big help.
If it’s not too much bother, could you please email them to
hamish_maxwell@hotmail.com

For anyone interested in this kind of tech, I’m currently working on guidelines / instructions for how other trapping groups could build their own detection model, which will perform much better for their own set-up than my generic model. Where I’m trying to get to is an improved generic model that speeds up other users’ workflows. There is free software available for this. Labelling the 7,000 images I’ve currently trained the detection model on took about 3 days, but that was from already filed / classified images.

Thanks for the contact.

I’ll be posting more on this subject in the near future.

Cheers
Hamish


Ok, sent an email with a few pictures.


I started an app in QGIS too (Corax Classifier), because I was not happy with the TrapNZ camera records or the ZIP free app.

I soon wanted to see if there is a standard way of recording camera trap data so that we can aggregate and exchange it.
There is! It is called Camtrap DP. I hope you are using it.
More details. It would be great if TrapNZ worked towards supporting Camtrap DP and extending the attribute tables to handle more attributes such as sex, age, activity in a standard way.

While I was searching I found another classifier, Wildlife Insights (WI), that already does a fine job and is heavily supported by Google for AI analysis of birds and animals. There are sites in New Zealand using it. The app is fully online and free to most people and organisations. After defining and setting up a project, all that is required is to upload folders of your camera images.

I will download your app to try it but I have found that users are resistant to installing local apps.

It is very hard to do AI on rare species because there are not enough available images. Wildlife Insights discusses this in depth and estimates you need 1,000 images of each species to even begin. So I concluded that, since we will not have that number, we will have to identify them manually after a first AI pass that eliminates blanks and humans and gets the classification into broad categories like mammal, bird, etc.
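That triage step (drop blanks and humans, keep broad-category hits for manual species identification) is simple to sketch. The labels and function name below are illustrative, not WI’s API:

```python
def triage(detections):
    """First-pass triage: discard blanks and humans, keep broad-category
    detections (mammal, bird, ...) for manual species identification.
    `detections` is a list of (filename, label) pairs from the AI pass."""
    keep, discarded = [], []
    for name, label in detections:
        if label in {"blank", "human"}:
            discarded.append((name, label))
        else:
            keep.append((name, label))
    return keep, discarded
```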

Things have moved on in camera trap hardware. We now record short videos on a trigger, which are even harder to analyse. WI does this well if you convert the videos into image sequences, say at 1-second intervals. Then the sequence can be analysed automatically as a whole. It segments the images with bounding boxes around recognised animals. Unique animals are counted and a guess at what they are is added. You can then review and upgrade the species from a dropdown.
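A sketch of picking frame indices at roughly 1-second spacing (the function name is mine, not WI’s; you would then seek to each index with a video library such as OpenCV and save the frame):

```python
def one_second_frame_indices(fps: float, total_frames: int) -> list[int]:
    """Frame numbers to sample so consecutive samples are ~1 second apart.
    E.g. a 30 fps clip yields every 30th frame."""
    step = max(1, round(fps))  # whole frames per sampled image
    return list(range(0, total_frames, step))
```

If you have ffmpeg installed, `ffmpeg -i clip.mp4 -vf fps=1 frame_%04d.jpg` should do the same extraction directly.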

This is all very similar to my own app without the AI. I am now putting my effort into complementing WI with specific analysis for our project using the Camtrap DP standard.


Hi @kimo, thanks for those links. I hadn’t found either of those resources yet. I can definitely see a big benefit in sharing detection models and images. I’ve learnt a few things along the way dipping my toes into this field.

Firstly, diversity of images helps accuracy a lot. For example, the early detection models I started training were built on images from only a selection of cameras; once I had a couple of thousand rat images, I didn’t go to any of the other cameras for more rat images. This became an obvious weakness when running detection over images from other cameras: really obvious rats were being missed in some cases.

The second important learning was to include a lot of ‘empty’ images, once again from all cameras.

A third learning was being thoughtful about what I labelled in images. For example, initially I would draw around a whole rat, tail included. The weakness this introduces is that over half of the labelled box is just noise that adds confusion for the model.

Just these 3 changes (sample across all cameras, include lots of no-detection images, evaluate what you are classifying) gave me a huge jump in detection accuracy.
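The first two lessons (sample from every camera, and include empties from every camera) amount to stratified sampling of the training set. A minimal sketch, with made-up field names:

```python
import random
from collections import defaultdict

def stratified_sample(images, per_camera, seed=0):
    """`images` is a list of (camera_id, filename, label) tuples, where
    label may be "empty". Draw up to `per_camera` images from every
    camera so no single camera dominates the training set."""
    by_camera = defaultdict(list)
    for cam, name, label in images:
        by_camera[cam].append((cam, name, label))
    rng = random.Random(seed)  # fixed seed -> reproducible training sets
    sample = []
    for cam in sorted(by_camera):
        pool = by_camera[cam]
        rng.shuffle(pool)
        sample.extend(pool[:per_camera])
    return sample
```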
To that end, my thoughts at this stage are that trapping projects will get the best results with detection models trained on their own images, i.e. the model learns the context of classification within specific images.
The model I’m sharing now, trained on 7,000 images, took approximately 3 days of work to label. But this was on an already (manually) classified library of images. If I’d had to do this with no pre-classification it would have taken far longer.
I’ll be sharing some instructions and code soon for others to look through, in case they see value in building their own detection models or just want to start learning how to use this technology themselves.
I’ll be really interested to see how well the model I’m sharing can perform for others. I am getting hold of several hundred images shortly to test on (same locality, different cameras though).
I’ll go and check those links you’ve sent.
Thanks.

How interesting. We have a project to estimate the population of Buff-banded Rails, but we don’t have many images. I have just spent a holiday on Great Barrier Island and saw them every day without getting a single image; by the time I had got my phone out they were gone. I also set up a camera, but no luck. WI suggests creating additional synthetic image examples by pasting a bird into different backgrounds, at different scales and mirrorings, for a bigger training set. In the end, the number of rail sightings can be handled manually after screening. Note that Wildlife Insights builds the training model for me; all I have to do is add more examples. Segmentation is automatic in WI, an amazing breakthrough made open by Facebook’s SAM (Segment Anything Model) tools. Not so interesting to us nerds, but more practical for ecology groups.
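Enumerating those paste variations is just a cross product. A sketch (the actual compositing onto each background would use an image library such as Pillow; the names below are illustrative):

```python
from itertools import product

def synthetic_specs(backgrounds, scales=(0.5, 0.75, 1.0),
                    mirrored=(False, True)):
    """One spec per (background, scale, mirror) combination; each spec
    describes one synthetic training image of the cut-out bird."""
    return [{"background": bg, "scale": s, "mirrored": m}
            for bg, s, m in product(backgrounds, scales, mirrored)]
```

Two backgrounds, three scales and both mirrorings already give twelve training images from a single rail cut-out.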
We will be making the images uploaded to WI public so they can be downloaded (together with the Camtrap DP metadata!) which would provide a training set.
