
Time Lapse module for Autonomous Moth Trap

This is the sixth post in a series:

This post describes the amt_timelapse.py script used on the Raspberry Pi Zero to capture images for the Autonomous Moth Trap. This script makes use of common utility functions defined in amt_util.py.

The software is designed to control the components of the following circuit but can be configured to accommodate a number of alternatives.

Requirements

  • Turn on the moth light (LED light tube) and ring light at a scheduled time, or at multiple scheduled times (e.g. several one-hour periods through the night)
  • For each operation period, take a specified number of timestamped images at regular intervals
  • Turn off the moth light and ring light after operation
  • Optionally use the status light to indicate operation and when images are captured
  • Optionally get temperature and humidity readings from a DHT22 (or DHT11) sensor and link these to each image
  • Save metadata documenting the settings associated with each operation period
  • Support alternative arrangements of GPIO pins
  • Use a real-time clock to track time when not connected to a network

Implementation

  1. Scheduling is controlled by crontab entries to run the script (“python3 /home/pi/amt_timelapse.py”) at defined times.
  2. amt_timelapse.py reads the amt_timelapse.json configuration file located in the current folder (or an alternative JSON configuration file supplied as the first command-line argument).
  3. amt_timelapse.json serves as a container for: the unit name; basic metadata on the unit (Raspberry Pi version, camera type, lighting options); control parameters for PiCamera; options to enable the temperature/humidity sensor and status light; the destination folder for output; and options to override the default GPIO pins (a hypothetical example is sketched after this list). This will be expanded to hold other metadata (coordinates, contact information) and could also transfer wifi settings, schedule setup parameters, etc.
  4. amt_timelapse.py creates a subfolder in the destination folder to receive the images and copies the JSON configuration file into the subfolder.
  5. The script then sets up the GPIO pins.
  6. The previous state of the status light (red/green/off) is remembered and the light is set to green if enabled, otherwise off.
  7. The moth light and ring light are switched on.
  8. If a temperature/humidity sensor is enabled, it is now powered up (unless its VCC pin is connected directly to a 3.3V pin) and initialised.
  9. If configured, the script sleeps for a specified number of seconds before imaging begins (since there is little point in capturing images before insects have had time to respond to the light).
  10. The camera is now enabled and set to preview.
  11. The script now captures the specified number of images at the requested interval. (If the number is set to -1, the series is unbounded – this may be appropriate if the unit is intended to run until the battery fails.) Each image is named with a timestamp and, if these are being sensed, with the temperature and humidity. If the status light is enabled, it is switched to red for each capture.
  12. The lights are now switched off.
  13. If required, the temperature/humidity sensor is turned off and the status light is reset.
  14. Throughout, progress is written to a log file.
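
As a concrete illustration of the configuration described in step 3, the following sketch writes a hypothetical amt_timelapse.json. The key names and values are illustrative assumptions only, not necessarily those expected by amt_timelapse.py:

```python
import json

# Hypothetical configuration content; key names are illustrative assumptions.
example_config = {
    "unitname": "AMT-Zero-01",
    "processor": "Raspberry Pi Zero W",
    "camera": "Raspberry Pi HQ Camera",
    "mothlight": "LED tube",
    "brightness": 50,            # PiCamera control parameters
    "quality": 85,               # JPEG quality
    "intervalseconds": 60,       # seconds between captures
    "imagecount": 300,           # -1 for an unbounded series
    "initialdelayseconds": 300,  # wait before the first image
    "usesensor": True,           # enable DHT22 temperature/humidity readings
    "usestatuslight": True,
    "destinationfolder": "/home/pi/captures",
    "gpio": {"mothlight": 26, "ringlight": 19, "sensorpower": 16}
}

with open("amt_timelapse.json", "w") as f:
    json.dump(example_config, f, indent=2)
```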

Notes

DHT11/DHT22

Many websites recommend using the adafruit_dht package to read DHT11/DHT22 sensors. I was unable to get this working successfully on the Pi Zero – it consistently reported wiring issues or incomplete buffers – so I instead adopted a solution using pigpiod (see https://abyz.me.uk/rpi/pigpio/code/DHT.py). The daemon is likely to consume extra power, so I will evaluate whether to start and stop it only while capturing images.
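
If starting the daemon only for the capture window proves worthwhile, the control could be as simple as the following sketch (assuming pigpiod is installed and the script is permitted to launch it with sudo):

```python
import subprocess
import time

def start_pigpiod():
    # Launch the pigpio daemon; if it is already running this simply fails quietly.
    subprocess.run(["sudo", "pigpiod"], check=False)
    time.sleep(1)  # give the daemon a moment to initialise

def stop_pigpiod():
    # Stop the daemon again once the capture period is over.
    subprocess.run(["sudo", "killall", "pigpiod"], check=False)
```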

Simplifying configuration and file access

For a future version, I intend to add an IP67 female USB 2.0 socket and another push button. This is to make it easier to transfer files to and from the system and to lower the technical threshold for potential users. My thinking is as follows:

  • The user can edit the JSON file (which could be extended to specify settings for crontab). A simple command-line or form-based interface could be offered on Windows/Mac/Unix to create/validate these files before copying them to a USB storage device.
  • The user plugs the USB storage device into the unit’s external USB socket.
  • The user pushes the extra button on the unit, triggering a script that waits for a GPIO interrupt.
  • If a USB storage device is detected, the script turns on the status light to indicate operation.
  • If the unit holds untransferred folders of images, the script transfers a copy to the USB storage device.
  • If the transfer is successful and all images are on the USB storage device, EITHER the script automatically deletes the on-board copies OR does so only in response to a configuration flag.
  • If the USB storage device includes JSON files, the script transfers these to the unit and makes any associated changes to crontab, etc.
  • The script writes a human-readable report/log to the USB storage device to document all stages and report the current configuration settings and schedule.
  • The script turns off the status light.

This would make it possible to use the device and save images, etc. even in the absence of wifi (or using a Raspberry Pi Zero without wifi support).
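
A minimal sketch of how the button-triggered transfer might begin, assuming a gpiozero Button on a hypothetical BCM pin 13, image folders under /home/pi/captures, and USB storage automounted under /media/pi:

```python
import shutil
from pathlib import Path
from gpiozero import Button

button = Button(13)       # hypothetical GPIO pin for the extra push button
button.wait_for_press()   # block until the user presses the button

mounts = list(Path("/media/pi").glob("*"))   # automounted USB storage, if any
if mounts:
    usb = mounts[0]
    for folder in Path("/home/pi/captures").iterdir():
        if folder.is_dir():
            target = usb / folder.name
            if not target.exists():
                # Copy each untransferred folder of images to the USB device.
                shutil.copytree(folder, target)
```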


Software components for Autonomous Moth Trap

This is the fifth post in a series:

A key goal for the Autonomous Moth Trap project is to make it easier to collect and manage time series data on insects using units that operate in the field.

The Danish project generously shared both the software that was deployed on their Raspberry Pi systems and the Python scripts they have developed for processing the resulting images.

I have used some of the code and many of the concepts from these scripts in my own system. I have created a new Github project (github.com/dhobern/AMT) to manage and share my code and other digital assets. I welcome review, bug fixes and reuse.

There are at least eight software components that should form the core for a software-data ecosystem for this trap and that may well apply to other related use cases. I have existing implementations for a number of these, although many improvements are possible. Others can follow as more images are collected.

The figure at the top of this post shows some of the relationships between these components. The two components to the left execute inside the trap (using the Raspberry Pi as the processor). The rest execute on a desktop computer or laptop (Windows/Mac/Unix).

The path indicated by the green arrows reflects my current focus and what I hope to achieve in the near future. This involves automated collection of images, software assistance in deriving a species list and minimum counts (plus a range of metadata) for each species and then publication as a sample event dataset to GBIF or other public platforms.

As the number of identified images increases for a given location or region, it will become possible to execute the path indicated by the orange arrows, building a training set from identified images and then training a machine learning model for image recognition. There is also potential to integrate machine learning into the image segmentation stage to improve classification of interesting and uninteresting objects and to enhance recognition of the same individual in multiple images.

Once a model has been trained and works, it will be possible to activate the path indicated by the purple arrows and automate much more of the process. Quality control will be important and there should probably be other links that verify the identifications and feed more identified images in to retrain the model.

The following are brief notes on each of the components indicated. More detail will be presented in subsequent posts.

Time Lapse Capture

I have a working version of this component, written in Python and controlled with a JSON configuration file. It controls the lights (moth light, ring light for illumination), a temperature/humidity sensor and the camera (interval and number of images, brightness, contrast, saturation, sharpness and JPEG quality). The output is a folder containing a series of images with timestamps and temperature/humidity readings in the filename (but I plan to add these readings to the EXIF too), along with a timestamped copy of the JSON configuration file (since this contains metadata that may be useful later). My Raspberry Pi Zero unit uses this component triggered as a cron job (or as multiple cron jobs at different times of the night).
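
As an illustration of this naming scheme (the exact format used by amt_timelapse.py may differ), a timestamped filename carrying the sensor readings might be built like this:

```python
from datetime import datetime

def image_filename(temperature=None, humidity=None):
    # e.g. "20210604-213000-18.4C-67.2pc.jpg" -- the format shown is illustrative.
    name = datetime.now().strftime("%Y%m%d-%H%M%S")
    if temperature is not None and humidity is not None:
        name += f"-{temperature:.1f}C-{humidity:.1f}pc"
    return name + ".jpg"
```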

Motion Capture

The version implemented by the Danish team uses the software developed by the Motion project, along with some small Python scripts to control lights, etc., all controlled via cron jobs. I have modified the scripts on my Raspberry Pi 4 unit to add temperature/humidity readings. I expect to expand my Time Lapse Capture component so it uses the Motion software as an alternative mode alongside Time Lapse. This will allow the configuration metadata to be largely identical for both options.

Segment Images

I have again worked from software developed by the Danish team but rewritten large sections to reflect my wishes. My version works on the folder produced by the Pi unit and then generates several derived products:

  • A CSV file listing all images and associated metadata for each (temperature, humidity, etc.)
  • A CSV file listing each “blob” of interest in any of these images, including coordinates, size, significant colours, an identifier for a “track” that represents a presumed repeated capture of the same individual across multiple images, etc.
  • A folder containing cropped images for each blob that appears or changes between images

I will also store a timestamped copy of the configuration settings for the image segmentation as part of each output data set.

Track Editor

I have written a Python GUI that shows all blobs from each track as thumbnails, allows these tracks to be split or merged, uninteresting tracks to be deleted and a species or higher taxon to be added as an identification for each track. The outputs are a local taxon dictionary (for assisting entry of identifications – this output grows over time) and a CSV file with the identifications for each track. Since the track identifiers are changed by this tool, it also writes an updated version of the blob CSV file.

Event Reporter

I have not yet implemented this component, but it will take the data from the image, blob and track CSV files and produce a derived CSV file with minimum counts for each species or taxon recorded during the night, packaging this (along with all metadata from the configuration files) as a sampling event dataset (Darwin Core Archive or Frictionless Data) ready for publication to GBIF or other biodiversity data platforms.
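
The core derivation is straightforward: the minimum count for each taxon can be taken as the number of distinct tracks identified to that taxon during the night. A sketch, assuming hypothetical file and column names for the track CSV:

```python
import csv
from collections import defaultdict

# Count distinct tracks per identified taxon.
# "tracks.csv", "identification" and "trackid" are hypothetical names.
counts = defaultdict(set)
with open("tracks.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["identification"]:
            counts[row["identification"]].add(row["trackid"])

for taxon, tracks in sorted(counts.items()):
    print(taxon, len(tracks))   # minimum count for the night
```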

Training Set Manager

Given the outputs from the Track Editor, it will also be possible to build lists of blob images (and associated metadata) for each species identified. These should be managed to allow selection of a good training dataset to build a machine learning model for species identification. As well as the image content, the metadata will have good information on size, movement, time of appearance, etc. which may improve the models.

Model Builder

The images in the training set can be used to develop a machine learning model to identify the same species in subsequent samples. Metadata from associated configuration files will be captured to assist future interpretation.

Species Recognition

The final component will be a module that runs the machine learning model and generates similar data to the Track Editor (but with additional metadata). These results can then be fed directly to the Event Reporter or (more likely, especially in early phases) into a validation process.


Autonomous Moth Trap Project

This project seeks to build an automated moth trap with a machine learning identification model for Canberra moths (and other insects) based on the design published earlier this year by Kim Bjerge and colleagues in the journal Sensors:

Bjerge, K.; Nielsen, J.B.; Sepstrup, M.V.; Helsing-Nielsen, F.; Høye, T.T. An Automated Light Trap to Monitor Moths (Lepidoptera) Using Computer Vision-Based Tracking and Deep Learning. Sensors 2021, 21, 343. https://doi.org/10.3390/s21020343

I want to thank Kim Bjerge and Toke Thomas Høye for their generous assistance answering questions and sharing code.

This is the first post in a series:

Background

I started using a moth trap in the mid-nineties in the UK. At that time, I used a Heath trap with a 6W actinic tube and drew pictures of moths with crayons since I did not have a digital camera. Over the years, I’ve used a wide variety of moth traps and moth lights, including 15W actinic tubes and 125W MV lamps in various configurations, mainly from Anglian Lepidopterist Supplies in the UK, and the LepiLED lights from Dr Gunnar Brehm. Since 1999, I’ve used a series of different digital cameras to document the insects I’ve attracted (see Flickr and iNaturalist).

All of these efforts have in some small way contributed to knowledge of biodiversity. Observation records flow into the Global Biodiversity Information Facility and the Atlas of Living Australia. However, every one of these records is a response to random and non-standard circumstances. In one year, I may operate a light for many nights and photograph many insects. In another year, I may do much less.

This lack of standardisation is common throughout natural history and citizen science. In recent years, there has been growing concern at the apparent collapse in insect numbers in many regions of the world, but there are few places from which we have genuinely comparable data over long periods. We therefore cannot easily assess how significant the changes have been. We certainly have no scalable way to monitor insect populations and understand how composition and abundance varies across space and through time.

I have therefore been interested for a long time in ways that we could automate detection and monitoring for insects and other more cryptic groups. One obvious route is to build the tools for DNA-based monitoring. Hence my enthusiasm for Malaise trapping as a path to building the necessary reference libraries.

One other idea I have been keen to explore is relatively simple automation of imaging of insects coming to light. I sketched this (possibly self-explanatory) concept in July 2018, but never found the time to build it. It basically involves bright insect-attracting lights, on a timer so that the insects are left in peace before dawn, and (in my sketch) two cameras to image species landing on a vertical surface and on the ground below.

A sketch of a possible automated camera trap for moths, 31 July 2018

The purpose would simply be unsupervised collection of photographs that I would then upload to iNaturalist or elsewhere with my manual identifications.

Design

Early in 2021, I came across the paper by Kim Bjerge and his colleagues on an automated trap to detect and recognise a subset of Danish noctuid moth species.

Figure 1 from Bjerge et al. 2021: The portable light trap with a light table, a white sheet, and UV light to attract live moths during night hours. The computer vision system consists of a light ring, a camera with a computer and electronics, and a powered junction box with a DC-DC converter.

Bjerge et al.’s design involves the following components:

  • Raspberry Pi 4 to control lights and webcam (with motion detection software)
  • 15W actinic tube operated with 12V DC (to attract moths)
  • A3 LED light table (as contrasting surface on which moths can rest)
  • Logitech BRIO 4K Ultra HD webcam
  • LED ring light (to illuminate light table from camera side)
  • A circuit that allows the Raspberry Pi to turn the moth light and the ring light on and off at scheduled times

The Raspberry Pi software includes Python scripts to turn the lights on and off (through one of the GPIO pins) and to run motion detection and the Motion imaging software (Motion Project at GitHub).
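
A minimal sketch of what a lighton.py-style script might look like; the BCM pin number is an assumption, and the actual scripts from the Danish team may differ:

```python
import RPi.GPIO as GPIO

LIGHT_PIN = 21   # hypothetical BCM pin driving the relay transistor

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(LIGHT_PIN, GPIO.OUT)
GPIO.output(LIGHT_PIN, GPIO.HIGH)   # energise the relay and switch the lights on
# GPIO.cleanup() is intentionally omitted: it would reset the pin
# and turn the lights off again. A matching lightoff.py would set the pin LOW.
```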

Images collected by the system can then be processed using the Python Moth Classification and Counting software developed by Kim Bjerge (MCC-trap at GitHub). As described in the Sensors paper, this detects blobs in the images, tracks movement of the (presumed) same blob between images, and uses a Convolutional Neural Network model to classify the blobs based on a training set. In the Danish experiment, the training set focused on a small number of common noctuoid moths (eight classes).

Construction

I used the following components to build my version of the trap:

I had not attempted any electronics since I was a young child helping my father (other than some atrocious attempts to solder loose wires). The following resources helped me:

  • This YouTube Soldering Crash Course showed me everything I had got wrong with my earlier efforts at soldering – it’s actually all surprisingly easy
  • Charles Platt’s book, Make: Electronics, is a very helpful guide to the basics of using simple components in circuits

Some notes on issues I encountered or decisions I made:

  • The Huion light box, like perhaps all other models on sale, is turned on and off and dimmed or brightened with a touch sensor behind the glass. Whenever the unit is first powered on, the light is initially off. As I understand it, the Danish team turn the light box on manually when the trap is in use. The backing sheet on this Huion unit is attached with a tacky glue and can be peeled back. This makes it possible to desolder the wires that feed the LED strip and attach them directly to where the 12V DC input connects. This bypasses the touch sensor. The touch sensor in this model has negligible resistance when the light box is at its brightest, so no additional resistor is required when bypassing it.
  • The change to the light box allowed me to turn it on and off automatically in conjunction with the other lights. I included the second potentiometer so that I could vary its voltage below 12V. (The first potentiometer was already in the reference circuit to control the light ring.)
  • Bjerge et al. used a 12V battery to power their trap. Since I plan to operate the trap in areas with access to mains electricity, I used the laptop-style power supply listed above. I used the second DC-DC converter to power the Raspberry Pi at 5.1V from the same source. (The first converter was to provide 5V power for the light ring, but that circuit is off until the Raspberry Pi turns it on.)
  • Bjerge et al. included a 75 Ohm resistor to limit the voltage across the transistor and coil of the relay. Using the same resistor in my circuit with a mains DC supply that is doubtless at a higher voltage than a battery, the Raspberry Pi correctly activated the transistor through its GPIO pin, which caused the relay to close and power the lights. However, the relay remained closed once the GPIO pin was reset to 0V. Replacing the 75 Ohm resistor with 300 Ohm allowed the circuit to work as expected.
  • I wired everything inside the enclosure, with holes for the power cable, the cables for the three lights, the lens of the webcam and the ventilation grille (with two layers of the fly screen). The vent was to ensure that the Raspberry Pi does not overheat and seemed the simplest reasonably rain- and insect-proof solution I could find.
  • I printed a 3D model in ABS to hold the various components in place.
  • I mistakenly tried using a shorter USB cable to connect the camera to the Raspberry Pi. I used a USB 3.0 port but was stupid enough not to know that USB 3.0 cables are different from USB 2.0 ones. The short cable was not compatible and limited the camera to HD instead of Ultra HD 4K resolution.
  • It would have been better to have used an external knob and potentiometer to adjust the brightness of the light box (rather than the one I soldered into the circuit).
  • Using UV LEDs (as with the LepiLED) might be a simpler solution in place of the 15W tube. I may experiment with such an alternative.

Operation

Moth trap in operation

The webcam is positioned around 220 mm from the light table. It is configured to focus at this range and to use an exposure of around 130 ms. The light box is kept relatively dim simply to offer some contrast and most light comes from the ring light. I am using a white cotton pillowcase over the light table to provide an attractive landing surface for insects.

I’ve set the trap to come on automatically at 18:00 and turn off around 05:00. The cron settings are:

59 17 * * * python /home/pi/lighton.py
00 18 * * * motion
01 18 * * * /home/pi/setCamera.sh
00 05 * * * pkill motion
01 05 * * * python /home/pi/lightoff.py

At present I control the Raspberry Pi entirely through a PuTTY telnet connection and access the images by FTP through FileZilla. I plan to automate transfer of the images to a computer so they can be processed each day.

Right now, it’s winter in Canberra, often reaching below freezing for some of the night, so there are relatively few insects. I’ve uploaded some examples (including a number captured only at HD resolution) to iNaturalist.

The trap is now more or less ready for collecting what I hope will be many thousands of images in the warmer months.

Results

The MCC-trap software has not been trained for Australian moths, but this video is a test of how it works with the set of images collected overnight, 3/4 June 2021:

ArabaAMT-20210604

For the coming months, the trap will operate as a wildlife camera trap. I will segment the images, add identifications and build training sets for classifying the commoner species here. I’d like to get to the stage where I could even fully automate some records being submitted to iNaturalist.

As can be seen in the video, the out-of-the-box processing selects a fair number of background areas and misses some insects. It may be useful for me to spend some time training a model first to distinguish between things that interest me (insects and other animals) and things that don’t (shadows, blurs, etc.). This should simplify extraction of the segmented images for training an identification model.