
Time Lapse module for Autonomous Moth Trap

This is the sixth post in a series.

This post describes the amt_timelapse.py script used on the Raspberry Pi Zero to capture images for the Autonomous Moth Trap. This script makes use of common utility functions defined in amt_util.py.

The software is designed to control the components of the following circuit but can be configured to accommodate a number of alternatives.

Requirements

  • Turn on the moth light (LED light tube) and ring light at a scheduled time, or at multiple scheduled times (e.g. several one-hour periods through the night)
  • For each operation period, take a specified number of timestamped images at regular intervals
  • Turn off the moth light and ring light after operation
  • Optionally use the status light to indicate operation and when images are captured
  • Optionally get temperature and humidity readings from a DHT22 (or DHT11) sensor and link these to each image
  • Save metadata documenting the settings associated with each operation period
  • Support alternative arrangements of GPIO pins
  • Use a real-time clock to track time when not connected to a network

Implementation

  1. Scheduling is controlled by crontab entries to run the script (“python3 /home/pi/amt_timelapse.py”) at defined times.
  2. amt_timelapse.py reads the amt_timelapse.json configuration file located in the current folder (or an alternative JSON configuration file supplied as the first command-line argument).
  3. amt_timelapse.json serves as a container for: the unit name; basic metadata on the unit (Raspberry Pi version, camera type, lighting options); control parameters for PiCamera; options to enable the temperature/humidity sensor and status light; destination folder for output; and options to override the default GPIO pins. This will be expanded to hold other metadata (coordinates, contact information) and could also transfer wifi settings, schedule setup parameters, etc.
  4. amt_timelapse.py creates a subfolder in the destination folder to receive the images and copies the JSON configuration file into the subfolder.
  5. The script then sets up the GPIO pins.
  6. The previous state of the status light (red/green/off) is remembered and the light is set to green if enabled, otherwise off.
  7. The moth light and ring light are switched on.
  8. If a temperature/humidity sensor is enabled, it is now powered up (unless its VCC pin is connected directly to a 3.3V pin) and initialised.
  9. If configured, the script now sleeps for a specified number of seconds (since there is likely to be little point in imaging before insects have had time to respond to the light).
  10. The camera is now enabled and set to preview.
  11. The script now captures the specified number of images at the requested interval (a simplified sketch of this loop appears after this list). If the number is set to -1, the series is unbounded – this may be appropriate if the unit is intended to run until the battery fails. Each image is named with a timestamp, plus temperature and humidity if these are being sensed. If the status light is enabled, it is switched to red for each capture.
  12. The lights are now switched off.
  13. If required, the temperature/humidity sensor is turned off and the status light is reset.
  14. Throughout, progress is written to a log file.
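
The following is a minimal sketch of this capture sequence. It is not the actual amt_timelapse.py: the configuration keys, default pin number and folder layout are invented for the example.

```python
# Simplified sketch of the time-lapse capture sequence (illustrative only;
# the real script is amt_timelapse.py in github.com/dhobern/AMT).
# Configuration keys and the default pin number here are hypothetical.
import json
import sys
import time
from datetime import datetime

import RPi.GPIO as GPIO
from picamera import PiCamera

# Read the JSON configuration (first command-line argument, or the default)
configfile = sys.argv[1] if len(sys.argv) > 1 else 'amt_timelapse.json'
with open(configfile) as f:
    config = json.load(f)

# Set up GPIO and switch the lights on
GPIO.setmode(GPIO.BCM)
lightpin = config.get('mothlightpin', 26)        # hypothetical key/default
GPIO.setup(lightpin, GPIO.OUT)
GPIO.output(lightpin, GPIO.HIGH)

# Optional delay to let insects respond to the light
time.sleep(config.get('initialdelay', 120))

camera = PiCamera()
camera.start_preview()

interval = config.get('interval', 10)            # seconds between images
count = config.get('imagecount', 360)            # -1 means unbounded
folder = config.get('outputfolder', '.')

i = 0
while count < 0 or i < count:
    timestamp = datetime.now().strftime('%Y%m%d%H%M%S')
    camera.capture(f'{folder}/{timestamp}.jpg')  # timestamped filename
    time.sleep(interval)
    i += 1

# Shut everything down
camera.stop_preview()
GPIO.output(lightpin, GPIO.LOW)
GPIO.cleanup()
```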

Notes

DHT11/DHT22

Many websites recommend use of the adafruit_dht package to read DHT11/DHT22 sensors. I was unable to get this working successfully on the Pi Zero – it consistently reported wiring issues or incomplete buffers – so I instead adopted a solution using pigpiod (see https://abyz.me.uk/rpi/pigpio/code/DHT.py). The daemon is likely to consume extra power, so I will evaluate whether to start and stop it only while capturing images.
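
If the daemon is started and stopped around each capture session, the approach might look like the sketch below. This assumes pigpiod is installed as a systemd service and that the pi user can invoke sudo; the sensor-reading step itself is elided.

```python
# Sketch: run pigpiod only while a capture session is active, to save power.
# Assumes pigpiod is installed as a systemd service on the Pi.
import subprocess

import pigpio

subprocess.run(['sudo', 'systemctl', 'start', 'pigpiod'], check=True)
pi = pigpio.pi()                       # connect to the local daemon
try:
    # ... read the DHT22 here, e.g. using the DHT class from the
    # pigpio examples page linked above ...
    pass
finally:
    pi.stop()                          # close the connection
    subprocess.run(['sudo', 'systemctl', 'stop', 'pigpiod'], check=True)
```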

Simplifying configuration and file access

For a future version, I intend to add an IP67 female USB 2.0 socket and another push button. This is to make it easier to transfer files to and from the system and to lower the technical threshold for potential users. My thinking is as follows:

  • The user can edit the JSON file (which could be extended to specify settings for crontab). A simple command-line or form-based interface could be offered on Windows/Mac/Unix to create/validate these files before copying them to a USB storage device.
  • The user plugs the USB storage device into the unit’s external USB socket.
  • The user pushes the extra button on the unit, triggering a script that waits for a GPIO interrupt (see the sketch after this list).
  • If a USB storage device is detected, the script turns on the status light to indicate operation.
  • If the unit holds untransferred folders of images, the script transfers a copy to the USB storage device.
  • If the transfer is successful and all images are on the USB storage device, EITHER the script automatically deletes the on-board copies OR does so only in response to a configuration flag.
  • If the USB storage device includes JSON files, the script transfers these to the unit and makes any associated changes to crontab, etc.
  • The script writes a human-readable report/log to the USB storage device to document all stages and report the current configuration settings and schedule.
  • The script turns off the status light.

This would make it possible to use the device and save images, etc. even in the absence of wifi (or using a Raspberry Pi Zero without wifi support).
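
A sketch of how the button-triggered transfer might begin is shown below. None of this is implemented yet: the GPIO pin, mount point and folder paths are assumptions, and the report writing, configuration import and optional deletion steps are only indicated as comments.

```python
# Sketch of the proposed button-triggered USB transfer (not yet implemented).
# The button pin, mount point and capture folder are illustrative only.
import os
import shutil
import signal

import RPi.GPIO as GPIO

BUTTONPIN = 16                        # hypothetical pin for the extra button
USBMOUNT = '/media/usb'               # hypothetical mount point
CAPTURES = '/home/pi/captures'        # hypothetical on-board image folder

def transfer(channel):
    if not os.path.ismount(USBMOUNT):
        return                        # no USB storage device detected
    # ... turn on status light here ...
    for folder in os.listdir(CAPTURES):
        src = os.path.join(CAPTURES, folder)
        dst = os.path.join(USBMOUNT, folder)
        if not os.path.exists(dst):
            shutil.copytree(src, dst)  # copy untransferred folders
    # ... verify copies (optionally delete on-board originals), import any
    # JSON configuration files, write report/log, turn off status light ...

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTONPIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(BUTTONPIN, GPIO.FALLING, callback=transfer,
                      bouncetime=500)

signal.pause()                        # wait indefinitely for button presses
```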


Software components for Autonomous Moth Trap

This is the fifth post in a series.

A key goal for the Autonomous Moth Trap project is to make it easier to collect and manage time series data on insects using units that operate in the field.

The Danish project generously shared both the software that was deployed on their Raspberry Pi systems and the Python scripts they have developed for processing the resulting images.

I have used some of the code and many of the concepts from these scripts in my own system. I have created a new GitHub project (github.com/dhobern/AMT) to manage and share my code and other digital assets. I welcome review, bug fixes and reuse.

There are at least eight software components that should form the core for a software-data ecosystem for this trap and that may well apply to other related use cases. I have existing implementations for a number of these, although many improvements are possible. Others can follow as more images are collected.

The figure at the top of this post shows some of the relationships between these components. The two components to the left execute inside the trap (using the Raspberry Pi as the processor). The rest execute on a desktop computer or laptop (Windows/Mac/Unix).

The path indicated by the green arrows reflects my current focus and what I hope to achieve in the near future. This involves automated collection of images, software assistance in deriving a species list and minimum counts (plus a range of metadata) for each species, and then publication as a sampling event dataset to GBIF or other public platforms.

As the number of identified images increases for a given location or region, it will become possible to execute the path indicated by the orange arrows, building a training set from identified images and then training a machine learning model for image recognition. There is also potential to integrate machine learning into the image segmentation stage to improve classification of interesting and uninteresting objects and to enhance recognition of the same individual in multiple images.

Once a model has been trained and shown to work, it will be possible to activate the path indicated by the purple arrows and automate much more of the process. Quality control will be important, and there should probably be additional links that verify the identifications and feed more identified images back in to retrain the model.

The following are brief notes on each of the components indicated. More detail will be presented in subsequent posts.

Time Lapse Capture

I have a working version of this component, written in Python and controlled with a JSON configuration file. It controls the lights (moth light, ring light for illumination), a temperature/humidity sensor and the camera (interval and number of images, brightness, contrast, saturation, sharpness and JPEG quality). The output is a folder containing a series of images with timestamps and temperature/humidity readings in the filename (but I plan to add these readings to the EXIF too), along with a timestamped copy of the JSON configuration file (since this contains metadata that may be useful later). My Raspberry Pi Zero unit uses this component triggered as a cron job (or as multiple cron jobs at different times of the night).
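
For the planned EXIF addition, picamera allows tags to be set before capture. A minimal sketch (the readings shown are placeholders) might look like this:

```python
# Sketch: embedding sensor readings in EXIF via picamera's exif_tags.
# The temperature and humidity values here are placeholders.
from picamera import PiCamera

camera = PiCamera()
temperature, humidity = 18.5, 72.0   # would come from the DHT22 in practice
camera.exif_tags['EXIF.UserComment'] = (
    f'Temperature={temperature:.1f}C Humidity={humidity:.1f}%')
camera.capture('example.jpg')        # the tag is written into the JPEG
```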

Motion Capture

The version implemented by the Danish team uses the software developed by the Motion project, along with some small Python scripts to control lights, etc., all controlled via cron jobs. I have modified the scripts on my Raspberry Pi 4 unit to add temperature/humidity readings. I expect to expand my Time Lapse Capture component so it uses the Motion software as an alternative mode alongside Time Lapse. This will allow the configuration metadata to be largely identical for both options.

Segment Images

I have again worked from software developed by the Danish team but rewritten large sections to reflect my wishes. My version works on the folder produced by the Pi unit and then generates several derived products:

  • A CSV file listing all images and associated metadata for each (temperature, humidity, etc.)
  • A CSV file listing each “blob” of interest in any of these images, including coordinates, size, significant colours, an identifier for a “track” that represents a presumed repeated capture of the same individual across multiple images, etc.
  • A folder containing cropped images for each blob that appears or changes between images

I will also store a timestamped copy of the configuration settings for the image segmentation as part of each output data set.
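
As an illustration of how these outputs might be consumed downstream (the file and column names here are invented for the example):

```python
# Sketch: reading the blob CSV produced by segmentation.
# File name and columns are illustrative; the actual outputs may differ.
import pandas as pd

blobs = pd.read_csv('blobs.csv')
# e.g. columns: image, track, x, y, width, height, dominantcolour
print(blobs.groupby('track').size())  # captures per presumed individual
```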

Track Editor

I have written a Python GUI that shows all blobs from each track as thumbnails, allows these tracks to be split or merged, uninteresting tracks to be deleted and a species or higher taxon to be added as an identification for each track. The outputs are a local taxon dictionary (for assisting entry of identifications – this output grows over time) and a CSV file with the identifications for each track. Since the track identifiers are changed by this tool, it also writes an updated version of the blob CSV file.

Event Reporter

I have not yet implemented this component, but it will take the data from the image, blob and track CSV files and produce a derived CSV file with minimum counts for each species or taxon recorded during the night, packaging this (along with all metadata from the configuration files) as a sampling event dataset (Darwin Core Archive or Frictionless Data) ready for publication to GBIF or other biodiversity data platforms.
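
Since each track represents a presumed individual, the minimum count for a taxon is simply the number of distinct tracks identified to it. A sketch of that derivation (with assumed file and column names):

```python
# Sketch: deriving minimum counts per taxon from track identifications.
# File name and column names ('taxon', 'track') are assumptions.
import pandas as pd

tracks = pd.read_csv('tracks.csv')    # one row per identified track
counts = (tracks.groupby('taxon')['track']
                .nunique()            # distinct tracks = minimum individuals
                .rename('minimumcount')
                .reset_index())
counts.to_csv('event_counts.csv', index=False)
```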

Training Set Manager

Given the outputs from the Track Editor, it will also be possible to build lists of blob images (and associated metadata) for each species identified. These should be managed to allow selection of a good training dataset for building a machine learning model for species identification. As well as the image content, the metadata will include useful information on size, movement, time of appearance, etc., which may improve the models.

Model Builder

The images in the training set can be used to develop a machine learning model to identify the same species in subsequent samples. Metadata from associated configuration files will be captured to assist future interpretation.

Species Recognition

The final component will be a module that runs the machine learning model and generates similar data to the Track Editor (but with additional metadata). These results can then be fed directly to the Event Reporter or (more likely, especially in early phases) into a validation process.


Araba Bioscan 22-29 October 2021

This is the first week since I completed a year’s samples to send for taxonomic sorting and curation at the Centre for Biodiversity Genomics in Guelph. My focus will now be on a small number of groups of interest, while retaining the rest of the specimens for metabarcoding.


Alternatives for Autonomous Moth Trap

This is the fourth post in a series.

Raspberry Pi 4 + Logitech BRIO + motion detection

The components used in my first autonomous moth trap (mostly following the Danish design, except for the LED tube) are:

The circuit design is available here.

The Logitech camera depends on a USB 3.0 connection, which requires a Raspberry Pi 4; this in turn necessitates venting the enclosure and adding a fan. The power of this unit makes it possible to use motion detection to capture images. This has demonstrable benefit in producing time series imaging for active insects, but (on warm nights) results in images being captured almost every 2 seconds.

A sample from this model has been uploaded as a video.

Raspberry Pi Zero + Raspberry Pi HQ camera + time lapse

I wanted to test a more lightweight (and significantly cheaper) alternative that could more easily be deployed on battery power in the field.

The A3 LED light table in the original model seems to add little to the effectiveness of the system. Power is better devoted to running the moth light and the illumination from the light ring.

The Raspberry Pi Zero has significantly lower power consumption than the Raspberry Pi 4 and does not require a fan or venting. It is also compatible with the Raspberry Pi HQ camera and 6mm wide-angle lens. This camera has a larger image size than the Logitech Brio for less than half the cost and without the need for a USB 3.0 connection.

I have therefore constructed a second model using these components:

I have written a Python script (triggered as a cron job) to capture images on time lapse. The code (AMT_TimeLapse.py) is in GitHub along with other software and files for the project. A JSON configuration file controls various settings and the images are saved to a folder including a timestamped copy of this configuration.

I used the instructions here to add the real-time clock to the Pi: Adding a DS3231 Real Time Clock To The Raspberry Pi. The circuit diagram for this unit is available here. The number of UV lights is adjustable (3, 6 or 9).

An early result from this trap has been uploaded as a video.