
Autonomous Moth Trap Image Pipeline

This is the seventh post in a series:

This post summarises the image and data pipeline currently in place for capturing and processing images from two Autonomous Moth Traps:

  • AMT-2: Raspberry Pi 4 unit with a Logitech BRIO camera collecting images via motion detection, as in the Danish design:
    • Positioned on the ground in a fixed location for 250 nights between 22 May 2021 and 6 May 2022, collecting 967,511 images
    • Subsequently fitted to a wall, operating so far for 417 nights between 7 August 2022 and 28 September 2023 (ongoing) and collecting 1,124,600 images to date
  • AMT-Alpha: Raspberry Pi Zero unit with Raspberry Pi HQ camera and 6 mm lens collecting images via timelapse:
    • Positioned on ground at various locations for a total of 122 nights between 20 November 2021 and 28 September 2023, collecting 82,833 images

Videos made from the images collected by each trap can be seen at https://vimeo.com/user157042939.

Together these traps have collected 2,174,944 images. The pipeline discussed below has identified and extracted 20,209,207 features of interest (“blobs”). This post documents the current processing, but I am reviewing all steps and expect to begin applying machine learning tools shortly.

Python software and associated YAML configuration files can be accessed from the dhobern/AMT repository on GitHub. The motion detection software is from the Motion-Project/motion repository.

AMT-2 image capture

AMT-2 is triggered daily using the following crontab settings (shown for 29 September 2023):

# m h  dom mon dow   command
# Light Trap Moths
4 19 * * * python /home/pi/lighton.py
5 19 * * * motion
6 19 * * * /home/pi/setCamera.sh
5 3 * * * pkill motion
6 3 * * * python /home/pi/lightoff.py
7 3 * * * /home/pi/backup.sh
0 12 * * * python3 /home/pi/amt_crontab.py sunset+60 sunrise-120 480

These settings indicate that the trap will next execute the following series of actions:

  • 19:04 – turn on lights (high-power LEDs and ring-light)
  • 19:05 – begin motion detection
  • 19:06 – apply camera parameters once the camera is started
  • 03:05 – end motion detection (480 minutes after start)
  • 03:06 – turn off lights
  • 03:07 – copy images, configuration settings (YAML), crontab and camera settings to a staging folder for SFTP access
  • 12:00 – reset the crontab times so that motion detection on the following night starts an hour after sunset and ends two hours before sunrise or after 480 minutes, whichever is earlier (a sketch of this calculation follows below)
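
As a concrete illustration of the last step, the recalculation might look something like the minimal sketch below. It assumes the third-party astral package for sunrise/sunset times and simply prints replacement crontab lines; the real amt_crontab.py takes its offsets as arguments (sunset+60, sunrise-120, 480) and rewrites the crontab itself, so treat this only as a sketch of the logic.

# Hypothetical sketch of the sunset/sunrise recalculation performed by amt_crontab.py.
# Assumes the "astral" package; times are returned in UTC unless a tzinfo is supplied.
import datetime
from astral import LocationInfo
from astral.sun import sun

LATITUDE, LONGITUDE = -35.264047, 149.083427   # trap coordinates (see event metadata below)
MAX_MINUTES = 480

def schedule(date):
    observer = LocationInfo(latitude=LATITUDE, longitude=LONGITUDE).observer
    tonight = sun(observer, date=date)
    tomorrow = sun(observer, date=date + datetime.timedelta(days=1))
    start = tonight["sunset"] + datetime.timedelta(minutes=60)           # sunset + 60
    latest_end = tomorrow["sunrise"] - datetime.timedelta(minutes=120)   # sunrise - 120
    end = min(start + datetime.timedelta(minutes=MAX_MINUTES), latest_end)
    return start, end

if __name__ == "__main__":
    start, end = schedule(datetime.date.today())
    # Print replacement crontab entries; the real script edits the crontab directly.
    print(f"{start.minute} {start.hour} * * * python /home/pi/lighton.py")
    print(f"{end.minute} {end.hour} * * * pkill motion")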

The configuration settings for this trap are currently as follows:

event:
  basisofrecord: Machine observation
  coordinateUncertaintyInMeters: 2
  decimalLatitude: -35.264047
  decimalLongitude: 149.083427
  geodeticDatum: WGS84
  recordedby:
    email: dhobern@gmail.com
    name: Donald Hobern
    orcid: 0000-0001-6492-4016
provenance:
  capture:
    camera: Logitech BRIO 4K
    illumination: 10-inch ring light
    imageheight: 2160
    imagewidth: 3840
    mode: Motion
    operatingdistance: 250
    processor: Raspberry Pi 4
    unitname: AMT-2
    uvlight: High-power LED tube - 6 UV, 1 green, 1 blue, 1 white

The camera settings are the output from:

v4l2-ctl -d /dev/video0 --list-ctrls

The folder of images and metadata is automatically transferred over SFTP at 08:30 each day onto a (Windows 11) desktop machine for subsequent processing.
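
On the desktop side, the scheduled pull could be scripted along the lines of the following sketch. It assumes the paramiko library and uses placeholder host, key and path values rather than the actual transfer configuration, and it assumes the staging folder is flat (no subdirectories).

# Hypothetical sketch of pulling the staged folder over SFTP with paramiko.
# Host name, credentials and paths are placeholders, not the real configuration.
import os
import paramiko

HOST, USER, KEYFILE = "amt-2.local", "pi", os.path.expanduser("~/.ssh/id_rsa")
REMOTE_DIR, LOCAL_DIR = "/home/pi/staging", r"C:\AMT\incoming"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=KEYFILE)
sftp = client.open_sftp()

os.makedirs(LOCAL_DIR, exist_ok=True)
for name in sftp.listdir(REMOTE_DIR):
    # Copy every staged file (images, YAML, crontab and camera settings).
    sftp.get(f"{REMOTE_DIR}/{name}", os.path.join(LOCAL_DIR, name))

sftp.close()
client.close()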

AMT-Alpha image capture

On startup, AMT-Alpha launches a Python script, amt_modeselector.py. This waits for triggering via a push button on the outside of the unit. When the button is pushed, the script selects an action based on the position of a rotary switch, which may be in one of four states (a simplified sketch of this selection logic follows the list below):

  • Automatic – script does nothing, assuming that a cron job is scheduled to start amt_timelapse.py at a specified time. This mode is for unattended use.
  • Manual – script immediately launches amt_timelapse.py.
  • Transfer – script runs amt_transfer.py. If a USB drive has been inserted in the external USB port, this reads configuration options from /media/usb/AMT/amt_transfer.yaml and may transfer images, configuration files and logs onto the USB drive and new configuration files or updated software onto the device.
  • Off – script triggers a soft shutdown of the device.
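
A heavily simplified sketch of this selection logic is shown below. It assumes the RPi.GPIO library, active-low wiring for the button and rotary switch, and the gpio* pin assignments listed in the configuration later in this post; the real amt_modeselector.py also handles debouncing, status LEDs and logging.

# Simplified sketch of the mode selection in amt_modeselector.py (not the real code).
# Pin numbers follow the gpio* settings in the configuration shown further below.
import subprocess
import RPi.GPIO as GPIO

TRIGGER, MANUAL, TRANSFER, SHUTDOWN = 16, 22, 27, 17  # gpiomodetrigger, gpiomanualmode, ...

GPIO.setmode(GPIO.BCM)
for pin in (TRIGGER, MANUAL, TRANSFER, SHUTDOWN):
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

while True:
    GPIO.wait_for_edge(TRIGGER, GPIO.FALLING)   # wait for the push button
    if not GPIO.input(MANUAL):                  # rotary switch in Manual position
        subprocess.run(["python3", "/home/pi/amt_timelapse.py"])
    elif not GPIO.input(TRANSFER):              # Transfer position
        subprocess.run(["python3", "/home/pi/amt_transfer.py"])
    elif not GPIO.input(SHUTDOWN):              # Off position
        subprocess.run(["sudo", "shutdown", "-h", "now"])
    # Automatic position: do nothing and let cron start amt_timelapse.py.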

Regardless of whether image capture is triggered manually or via crontab, amt_timelapse.py reads configuration settings specified in amt_settings.yaml (which overrides default values set for the unit in amt_unit.yaml and underlying default values for the software specified in amt_defaults.yaml). If a GPS sensor is attached, the unit inserts coordinates into the configuration metadata via a temporary YAML file, amt_location.yaml. The complete final configuration is stored as an output file alongside the captured images.
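
The layering can be thought of as a simple recursive dictionary merge, with later files overriding earlier ones. The following is a sketch of that behaviour rather than the actual loading code:

# Sketch of layered configuration loading (not the actual implementation).
import yaml

def deep_merge(base, override):
    """Recursively overlay one configuration dictionary on another."""
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

config = {}
files = ["/home/pi/amt_defaults.yaml", "/home/pi/amt_unit.yaml",
         "/home/pi/amt_settings.yaml", "/home/pi/amt_location.yaml"]
for path in files:
    try:
        with open(path) as f:
            deep_merge(config, yaml.safe_load(f) or {})
    except FileNotFoundError:
        pass  # e.g. amt_location.yaml is only present when a GPS fix was obtained

config["_configurationfiles"] = files
# The merged result is what gets written out alongside the captured images.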

The following configuration file is from a run of AMT-Alpha on 28 September 2023. This included a 120-second delay before collecting images at 20-second intervals:

_configurationfiles:
- /home/pi/amt_defaults.yaml
- /home/pi/amt_unit.yaml
- /home/pi/amt_settings.yaml
- /home/pi/amt_location.yaml
event:
  basisofrecord: Machine observation
  coordinateTimestamp: '2023-09-28T19:03:11.621919+10:00'
  coordinateUncertaintyInMeters: 1
  decimalLatitude: -35.264043
  decimalLongitude: 149.08358
  geodeticDatum: WGS84
  lunarPhase: Full Moon
  recordedby:
    email: dhobern@gmail.com
    name: Donald Hobern
    orcid: 0000-0001-6492-4016
  sunriseTime: '2023-09-29T05:46:00+10:00'
  sunsetTime: '2023-09-28T18:04:00+10:00'
provenance:
  capture:
    awb_gains:
    - 2.8
    - 1.6
    awb_mode: 'off'
    brightness: 60
    camera: Raspberry Pi HQ + 6mm Wide Angle Lens
    contrast: 35
    envsensor: DHT22
    folder: /home/pi/AMT/
    gpioenvdata: 9
    gpioenvpower: 10
    gpiogpspower: 24
    gpiogreen: 25
    gpiolights: 26
    gpiomanualmode: 22
    gpiomodetrigger: 16
    gpiored: 7
    gpioshutdownmode: 17
    gpiotransfermode: 27
    gpssensor: BN220
    illumination: 10-inch ring light
    imageheight: 3040
    imagewidth: 4056
    initialdelay: 120
    interval: 20
    maximages: 720
    meter_mode: matrix
    mode: TimeLapse
    operatingdistance: 265
    processor: Raspberry Pi Zero W
    program: /home/pi/amt_modeselector.py
    quality: 50
    saturation: 0
    sharpness: 70
    transferimages: true
    trigger: Manual
    unitname: AMT-alpha
    uvlight: High-power LED tube - 4 UV, 1 green, 1 blue
    version: 0.9.2

Images and configuration files may be transferred for processing via a USB drive or SFTP.

Segmenting images

Images from both traps have been processed using SegmentImages.py, initially based on the published Danish code. This uses OpenCV to detect objects of interest (“blobs”) and then applies a cost calculation to determine which blobs are likely to represent the same insect in consecutive images.

The cost calculation is based on costs in five dimensions (calculated in amt_tracker.py):

  • Size – 0 if the two blobs have the same number of pixels, 1 if one blob is at least four times the size of the other, with linear interpolation for intermediate values
  • Distance – 0 if the centroids of the two blobs are within 25 pixels of one another, 0.01 if the two blobs overlap or their centroids are within 100 pixels, 0.02 if they are within 250 pixels, and in all other cases the centroid distance divided by 4405 (approximately the image diagonal, i.e. the maximum possible distance)
  • Colour – crude comparison of the similarity of colours in the two blobs. The pixels in each blob are assigned to one of eight cells in RGB colourspace (intensity less than or greater than 128 for each of the RGB components), identified as K for "black", R for "red", G for "green", B for "blue", C for "cyan", M for "magenta", Y for "yellow" and W for "white". Each blob is then assigned a colour string including the letters for all cells containing at least 2% of the pixels in the blob. A cost of 1/8 is then assigned for each colour letter associated with one blob and not the other.
  • Direction – 0 if this is interpreted to be the first or second detection of the insect, otherwise 0 if the position is exactly aligned with the direction between the last two detections and 1 if it is in exactly the reverse direction, with linear interpolation for intermediate angles.
  • Age – allowing for insects disappearing and reappearing within five consecutive images. 0 if the blobs are in consecutive images, 1 if last seen five images previously, with linear interpolation for intermediate ages.

These five costs are then assigned weights based on a subjective (and only lightly tested) assessment of their relative importance. The weights applied have generally been 4 for Size and Distance, 2 for Direction and Colour, and 1 for Age. This means that the weighted cost for assuming two blobs are related is a distance in a hypercube with sides measuring 4, 4, 2, 2 and 1 units, i.e. with a maximum (diagonal) length of sqrt(41). Weighted costs are normalised to the range 0 to 1, and only values below 0.25 are considered plausible redetections. A sketch of this combination is shown below.
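
The combination can be sketched as follows. The weighting and the 0.25 threshold restate the description above; the size-cost interpolation shown is just one plausible reading of "linear interpolation", and the real calculation is in amt_tracker.py.

# Sketch of the weighted cost combination described above (see amt_tracker.py for the real code).
import math

WEIGHTS = {"size": 4, "distance": 4, "colour": 2, "direction": 2, "age": 1}
MAX_COST = math.sqrt(sum(w * w for w in WEIGHTS.values()))  # sqrt(41)

def size_cost(pixels_a, pixels_b):
    """0 for equal sizes, 1 when one blob is at least four times the other."""
    ratio = max(pixels_a, pixels_b) / min(pixels_a, pixels_b)
    return min((ratio - 1) / 3, 1.0)

def weighted_cost(costs):
    """Combine per-dimension costs (each 0-1) and normalise to the range 0-1."""
    total = math.sqrt(sum((WEIGHTS[k] * costs[k]) ** 2 for k in WEIGHTS))
    return total / MAX_COST

# A pair of blobs is only considered a plausible redetection below 0.25:
example = {"size": 0.1, "distance": 0.01, "colour": 0.125, "direction": 0.2, "age": 0.0}
plausible = weighted_cost(example) < 0.25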

Blobs are then assigned to “tracks” (series of locations of the same presumed insect over multiple images) based on an effort to minimise total cost.
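
Minimising total cost is essentially an assignment problem. The sketch below pairs existing tracks with new blobs using SciPy's Hungarian-algorithm implementation; the actual matching in the pipeline may well use a different (for example greedy) strategy, so this is only an illustration.

# Sketch of assigning new blobs to existing tracks by minimising total weighted cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

PLAUSIBLE = 0.25  # weighted costs above this are not accepted as redetections

def assign(tracks, blobs, cost):
    """cost(track, blob) -> normalised weighted cost in the range 0-1."""
    if not tracks or not blobs:
        return [], list(range(len(blobs)))
    matrix = np.array([[cost(t, b) for b in blobs] for t in tracks])
    rows, cols = linear_sum_assignment(matrix)
    matches = [(r, c) for r, c in zip(rows, cols) if matrix[r, c] < PLAUSIBLE]
    matched_blobs = {c for _, c in matches}
    unmatched = [i for i in range(len(blobs)) if i not in matched_blobs]
    return matches, unmatched  # unmatched blobs start new tracks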

The Python code creates a data subfolder containing:

  • amt_image.csv – CSV list of all images captured by the unit, including date and time and associated temperature and humidity if these were collected.
  • amt_blob.csv – CSV list of all blobs, including source image, bounding box, size, cost calculations and other variables. Each record also includes a track identifier and a changed flag indicating whether the blob was new or altered compared to earlier images. A sample is included as the image at the top of this post.
  • blobs – a folder containing segmented JPEG images for all blobs with the changed flag set to True.

This process successfully links many blobs into tracks but is also prone to merge or confuse tracks when insects are very active. The weightings are arbitrary, and tuning the weights might improve the process. Tracks are only a convenience to simplify later stages in the process.
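
As a small illustration of how these outputs fit together, the sketch below summarises blobs and changed segments per track with pandas. The column names ("track", "changed") are assumptions based on the description above and may not match the actual CSV headers.

# Sketch of summarising the segmentation outputs; column names are assumptions
# based on the description above and may not match the actual CSV headers.
import pandas as pd

blobs = pd.read_csv("data/amt_blob.csv")
images = pd.read_csv("data/amt_image.csv")

summary = (blobs.groupby("track")                  # hypothetical track-identifier column
                .agg(blobs=("track", "size"),
                     segments=("changed", "sum"))  # changed == True rows have JPEGs in blobs/
                .sort_values("blobs", ascending=False))
print(f"{len(images)} images, {len(blobs)} blobs, {blobs['track'].nunique()} tracks")
print(summary.head())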

Editing tracks

Another Python program, TrackEditor.py, is used to edit the tracks and associate them with species (or higher-taxon) identifications. This is a crude Tkinter application that loads data from amt_blob.csv along with the associated blob images and presents these for review and identification. Results are written into amt_track.csv, which lists the tracks and associates each with the name of the identified taxon. The editor allows tracks to be split and merged, so it also rewrites amt_blob.csv with revised track identifiers for the blobs.

The following image shows the TrackEditor window for some of the insects recorded by AMT-2 on the night of 28/29 September 2023. Clearly, several insects have been combined into a single track with id 187 (the 103 in parentheses gives the mean length of the sides of the associated images). Similarly, tracks 220 and 225 are for the same insect and can be joined.

The available operations are:

  • Clicking on the first image in a track joins the track to the previous track.
  • Clicking on any other image splits the track into two tracks, with the second track beginning with the clicked image.
  • Clicking the link icons (to the right of the track identifiers) on any two tracks merges them into a single track.
  • A scientific name can be entered into the text field for each track – a taxon dictionary supports autocompletion.
  • The three-letter codes assign common higher taxon names to the track (Insecta, Coleoptera, Diptera, Hymenoptera, Lepidoptera, Trichoptera, Hemiptera, Tortricidae, Oecophoridae, Formicidae and Araneae).
  • The first of the three icons opens a larger image view for the first image in the track with buttons to step through the track.
  • The second icon deletes the track.
  • The final icon opens a dialog allowing one or more images from the track to be selected and submitted as a new observation via the iNaturalist API (a rough sketch of this submission appears at the end of this section).

The following image shows the result of clicking on the first image of track 225 to merge it with track 220 and the larger image view for track 194.

The following image shows the result of splitting and organising track 187 and of adding two species identifications.

To date, this editor has been used to label 15,719 tracks containing 369,319 segmented images for approximately 350 taxa. Many images are series with very little inter-frame variation. Many taxa are larger groupings such as Diptera or Larentiinae.
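
For reference, the submission behind the final icon in the track editor can be sketched roughly as below. The endpoints are those of the public iNaturalist v1 API, but the token handling, field values and file name are placeholders rather than the editor's actual code.

# Rough sketch of creating an iNaturalist observation and attaching a blob image.
# Uses the public v1 API; the access token and field values here are placeholders.
import requests

API = "https://api.inaturalist.org/v1"
HEADERS = {"Authorization": "Bearer <access-token>"}  # obtained via iNaturalist OAuth

observation = {
    "observation": {
        "species_guess": "Epyaxa subidaria",           # name entered in the track editor
        "observed_on_string": "2023-09-28T22:14:00+10:00",
        "latitude": -35.264047,
        "longitude": 149.083427,
        "description": "Autonomous moth trap AMT-2, automated capture",
    }
}
resp = requests.post(f"{API}/observations", json=observation, headers=HEADERS)
resp.raise_for_status()
obs_id = resp.json()["id"]

with open("blobs/example_blob.jpg", "rb") as photo:
    requests.post(f"{API}/observation_photos", headers=HEADERS,
                  data={"observation_photo[observation_id]": obs_id},
                  files={"file": photo}).raise_for_status()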

Next steps

Labeling tracks (and hence blobs) with identifications is time-consuming but should allow a rich training set to be prepared with images representing a large proportion of the local fauna.

Sufficient images may already have been tagged to support training at least a first model to group insects into broad categories and to discard images that do not clearly represent individual insects. The outputs from such a process could then speed preparation of species-level training sets.
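
A first experiment along these lines might use transfer learning over the labelled blob images. The following is a minimal sketch, assuming PyTorch/torchvision and blob crops arranged in one folder per label; it illustrates a possible approach, not work already done.

# Minimal transfer-learning sketch for classifying blob images into broad groups.
# Assumes PyTorch/torchvision and blob crops arranged in per-label folders;
# this illustrates the planned approach and is not existing pipeline code.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("training/blobs_by_label", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # e.g. broad taxon groups

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()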


Autonomous Moth Trap Project

This project seeks to build an automated moth trap with a machine learning identification model for Canberra moths (and other insects) based on the design published earlier this year by Kim Bjerge and colleagues in the journal Sensors:

Bjerge, K.; Nielsen, J.B.; Sepstrup, M.V.; Helsing-Nielsen, F.; Høye, T.T. An Automated Light Trap to Monitor Moths (Lepidoptera) Using Computer Vision-Based Tracking and Deep Learning. Sensors 2021, 21, 343. https://doi.org/10.3390/s21020343

I want to thank Kim Bjerge and Toke Thomas Høye for their generous assistance answering questions and sharing code.

This is the first post in a series:

Background

I started using a moth trap in the mid-nineties in the UK. At that time, I used a Heath trap with a 6W actinic tube and drew pictures of moths with crayons since I did not have a digital camera. Over the years, I’ve used a wide variety of moth traps and moth lights, including 15W actinic tubes and 125W MV lamps in various configurations, mainly from Anglian Lepidopterist Supplies in the UK, and the LepiLED lights from Dr Gunnar Brehm. Since 1999, I’ve used a series of different digital cameras to document the insects I’ve attracted (see Flickr and iNaturalist).

All of these efforts have in some small way contributed to knowledge of biodiversity. Observation records flow into the Global Biodiversity Information Facility and the Atlas of Living Australia. However, every one of these records is a response to random and non-standard circumstances. In one year, I may operate a light for many nights and photograph many insects. In another year, I may do much less.

This lack of standardisation is common throughout natural history and citizen science. In recent years, there has been growing concern at the apparent collapse in insect numbers in many regions of the world, but there are few places from which we have genuinely comparable data over long periods. We therefore cannot easily assess how significant the changes have been. We certainly have no scalable way to monitor insect populations and understand how composition and abundance varies across space and through time.

I have therefore been interested for a long time in ways that we could automate detection and monitoring for insects and other more cryptic groups. One obvious route is to build the tools for DNA-based monitoring. Hence my enthusiasm for Malaise trapping as a path to building the necessary reference libraries.

One other idea I have been keen to explore is relatively simple automation of imaging of insects coming to light. I sketched this (possibly self-explanatory) concept in July 2018, but never found the time to build it. It basically involves bright insect-attracting lights on a timer, so that they switch off and leave the insects in peace before dawn, and (in my sketch) two cameras to image species landing on a vertical surface and on the ground below.

A sketch of a possible automated camera trap for moths, 31 July 2018

The purpose was simply unsupervised collection of photographs that I would then upload to iNaturalist or elsewhere with my manual identifications.

Design

Early in 2021, I came across the paper by Kim Bjerge and his colleagues on an automated trap to detect and recognise a subset of Danish noctuid moth species.

Figure 1 from Bjerge et al. 2021: The portable light trap with a light table, a white sheet, and UV light to attract live moths during night hours. The computer vision system consists of a light ring, a camera with a computer and electronics, and a powered junction box with a DC-DC converter.

Bjerge et al.’s design involves the following components:

  • Raspberry Pi 4 to control lights and webcam (with motion detection software)
  • 15W actinic tube operated with 12V DC (to attract moths)
  • A3 LED light table (as contrasting surface on which moths can rest)
  • Logitech BRIO 4K Ultra HD webcam
  • LED ring light (to illuminate light table from camera side)
  • A circuit that allows the Raspberry Pi to turn the moth light and the ring light on and off at scheduled times

The Raspberry Pi software includes Python scripts to turn the lights on and off (through one of the GPIO pins) and to run motion detection and the Motion imaging software (Motion Project at GitHub).
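
A lighton.py-style script can be as simple as driving a single GPIO pin high to switch the relay. The sketch below assumes the RPi.GPIO library and an illustrative pin number; lightoff.py would set the same pin low again.

# Sketch of a lighton.py-style script: drive one GPIO pin high to close the relay.
# The pin number here is illustrative; the actual wiring may differ.
import RPi.GPIO as GPIO

LIGHTS_PIN = 26  # hypothetical GPIO pin connected to the relay transistor

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIGHTS_PIN, GPIO.OUT)
GPIO.output(LIGHTS_PIN, GPIO.HIGH)  # lightoff.py would set this back to GPIO.LOW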

Images collected by the system can then be processed using the Python Moth Classification and Counting software developed by Kim Bjerge (MCC-trap at GitHub). As described in the Sensors paper, this detects blobs in the images, tracks movement of the (presumed) same blob between images, and uses a Convolutional Neural Network model to classify the blobs based on a training set. In the Danish experiment, the training set focused on a small number of common noctuoid moths (eight classes).

Construction

I used the following components to build my version of the trap:

I had not attempted any electronics since I was a young child with my father (other than some atrocious attempts to solder loose wires). The following resources helped me:

  • This YouTube Soldering Crash Course showed me everything I had got wrong with my earlier efforts at soldering – it’s actually all surprisingly easy
  • Charles Platt’s book, Make: Electronics, is a very helpful guide to the basics of using simple components in circuits

Some notes on issues I encountered or decisions I made:

  • The Huion light box, like perhaps all other models on sale, is turned on and off and dimmed or brightened with a touch sensor behind the glass. Whenever the unit is first powered on, the light is initially off. As I understand it, the Danish team turn the light box on manually when the trap is in use. The backing sheet on this Huion unit is attached with a tacky glue and can be peeled back. This makes it possible to desolder the wires that feed the LED strip and attach them directly to where the 12V DC input connects. This bypasses the touch sensor. The touch sensor in this model has negligible resistance when the light box is at its brightest, so no additional resistor is required when bypassing it.
  • The change to the light box allowed me to turn it on and off automatically in conjunction with the other lights. I included the second potentiometer so that I could vary its voltage below 12V. (The first potentiometer was already in the reference circuit to control the light ring.)
  • Bjerge et al. used a 12V battery to power their trap. Since I plan to operate the trap in areas with access to mains electricity, I used the laptop-style power supply listed above. I used the second DC-DC converter to power the Raspberry Pi at 5.1V from the same source. (The first converter was to provide 5V power for the light ring, but that circuit is off until the Raspberry Pi turns it on.)
  • Bjerge et al. included a 75 Ohm resistor to limit the voltage across the transistor and coil of the relay. Using the same resistor in my circuit with a mains DC supply that is doubtless at a higher voltage than a battery, the Raspberry Pi correctly activated the transistor through its GPIO pin, which caused the relay to close and power the lights. However, the relay remained closed once the GPIO pin was reset to 0V. Replacing the 75 Ohm resistor with a 300 Ohm one allowed the circuit to work as expected.
  • I wired everything inside the enclosure, with holes for the power cable, the cables for the three lights, the lens of the webcam and the ventilation grille (with two layers of the fly screen). The vent was to ensure that the Raspberry Pi does not overheat and seemed the simplest reasonably rain- and insect-proof solution I could find.
  • I printed a 3D model in ABS to hold the various components in place.
  • I mistakenly tried using a shorter USB cable to connect the camera to the Raspberry Pi. I used a USB 3.0 port but was stupid enough not to know that USB 3.0 cables are different from USB 2.0 ones. The short cable was not compatible and limited the camera to HD instead of Ultra HD 4K resolution.
  • It would have been better to have used an external knob and potentiometer to adjust the brightness of the light box (rather than the one I soldered into the circuit).
  • Using UV LEDs (as with the LepiLED) might be a simpler solution in place of the 15W tube. I may experiment with such an alternative.

Operation

Moth trap in operation

The webcam is positioned around 220 mm from the light table. It is configured to focus at this range and to use an exposure of around 130 ms. The light box is kept relatively dim simply to offer some contrast and most light comes from the ring light. I am using a white cotton pillowcase over the light table to provide an attractive landing surface for insects.

I’ve set the trap to come on automatically at 18:00 and turn off around 05:00. The cron settings are:

59 17 * * * python /home/pi/lighton.py
00 18 * * * motion
01 18 * * * /home/pi/setCamera.sh
00 05 * * * pkill motion
01 05 * * * python /home/pi/lightoff.py

At present I control the Raspberry Pi entirely through a PuTTY telnet connection and access the images by FTP through FileZilla. I plan to automate transfer of the images to a computer so they can be processed each day.

Right now, it’s winter in Canberra, often reaching below freezing for some of the night, so there are relatively few insects. I’ve uploaded some examples (including a number captured only at HD resolution) to iNaturalist.

The trap is now more or less ready for collecting what I hope will be many thousands of images in the warmer months.

Results

The MCC-trap software has not been trained for Australian moths, but this video is a test of how it works with the set of images collected overnight, 3/4 June 2021:

ArabaAMT-20210604

For the coming months, the trap will operate as a wildlife camera trap. I will segment the images, add identifications and build training sets for classifying the commoner species here. I’d like to get to the stage where I could even fully automate some records being submitted to iNaturalist.

As can be seen in the video, the out-of-the-box processing selects a fair number of background areas and misses some insects. It may be useful for me to spend some time training a model first to distinguish between things that interest me (insects and other animals) and things that don’t (shadows, blurs, etc.). This should simplify extraction of the segmented images for training an identification model.