Autonomous Moth Trap

Time Lapse module for Autonomous Moth Trap

This is the sixth post in a series:

This post describes the script used on the Raspberry Pi Zero to capture images for the Autonomous Moth Trap. This script makes use of common utility functions defined in

The software is designed to control the components of the following circuit but can be configured to accommodate a number of alternatives.


  • Turn on the moth light (LED light tube) and ring light at a scheduled time, or at multiple scheduled times (e.g. several one-hour periods through the night)
  • For each operation period, take a specified number of timestamped images at regular intervals
  • Turn off the moth light and ring light after operation
  • Optionally use the status light to indicate operation and when images are captured
  • Optionally get temperature and humidity readings from a DHT22 (or DHT11) sensor and link these to each image
  • Save metadata documenting the settings associated with each operation period
  • Support alternative arrangements of GPIO pins
  • Use a real-time clock to track time when not connected to a network


  1. Scheduling is controlled by crontab entries to run the script (“python3 /home/pi/”) at defined times.
  2. The script reads the amt_timelapse.json configuration file located in the current folder (or an alternative JSON configuration file supplied as the first command-line argument).
  3. amt_timelapse.json serves as a container for: the unit name; basic metadata on the unit (Raspberry Pi version, camera type, lighting options); control parameters for PiCamera; options to enable the temperature/humidity sensor and status light; destination folder for output; and options to override the default GPIO pins. This will be expanded to hold other metadata (coordinates, contact information) and could also transfer wifi settings, schedule setup parameters, etc.
  4. The script creates a subfolder in the destination folder to receive the images and copies the JSON configuration file into the subfolder.
  5. The script then sets up the GPIO pins.
  6. The previous state of the status light (red/green/off) is remembered and the light is set to green if enabled, otherwise off.
  7. The moth light and ring light are switched on.
  8. If a temperature/humidity sensor is enabled, it is now powered up (unless its VCC pin is connected directly to a 3.3V pin) and initialised.
  9. If configured to do so, the script now sleeps for a specified number of seconds (since there is likely to be little point in imaging before insects can respond to the light).
  10. The camera is now enabled and set to preview.
  11. The script now captures a series of the specified number of images at the requested interval. (If the number is set to -1, the series is unbounded – this may be appropriate if the unit is intended to run until the battery fails.) Each image is named with a timestamp and with temperature and humidity if these are being sensed. If the status light is enabled, it is switched to red for each capture.
  12. The lights are now switched off.
  13. If required the temperature/humidity sensor is turned off and the status light is reset.
  14. Throughout, progress is written to a log file.
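Steps 10 and 11 above can be sketched as follows. This is a simplified sketch, not the script's actual code: the camera and sensor are abstracted as callables so the loop can be shown without PiCamera, and the helper names and exact filename format are assumptions.

```python
import time
from datetime import datetime

def image_name(when, reading=None):
    """Timestamped filename, with temperature/humidity appended when sensed.
    The exact format here is an assumption for illustration."""
    name = when.strftime("%Y%m%d-%H%M%S")
    if reading is not None:
        temperature, humidity = reading
        name += "-{:.1f}C-{:.1f}RH".format(temperature, humidity)
    return name + ".jpg"

def capture_series(capture, count, interval, read_sensor=None):
    """Capture `count` images `interval` seconds apart; count == -1 runs
    unbounded (e.g. until the battery fails)."""
    taken = []
    while count == -1 or len(taken) < count:
        reading = read_sensor() if read_sensor else None
        filename = image_name(datetime.now(), reading)
        capture(filename)  # e.g. camera.capture(folder + filename)
        taken.append(filename)
        if count != -1 and len(taken) >= count:
            break
        time.sleep(interval)
    return taken
```

In the real script, `capture` would wrap the PiCamera capture call and `read_sensor` the DHT22 reader, and the status light would be switched to red around each capture.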



Many websites recommend use of the adafruit_dht package to read DHT11/DHT22 sensors. I was unable to get this working successfully on the Pi Zero – it consistently reported wiring issues or incomplete buffers – so I instead adopted a solution using pigpiod. The daemon is likely to consume extra power, so I will evaluate whether to start and stop it only while capturing images.
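One way to start and stop the daemon only around a capture session might look like this. A sketch under stated assumptions: it assumes pigpiod is managed by systemd (on some images the daemon is instead launched with `sudo pigpiod`), and the function names are hypothetical.

```python
import subprocess

def pigpiod_command(action):
    """Build the systemctl command used to start or stop the daemon."""
    if action not in ("start", "stop"):
        raise ValueError("action must be 'start' or 'stop'")
    return ["sudo", "systemctl", action, "pigpiod"]

def run_with_pigpiod(capture):
    """Run a capture session with pigpiod active only for its duration."""
    subprocess.run(pigpiod_command("start"), check=True)
    try:
        capture()
    finally:
        subprocess.run(pigpiod_command("stop"), check=True)
```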

Simplifying configuration and file access

For a future version, I intend to add an IP67 female USB 2.0 socket and another push button. This is to make it easier to transfer files to and from the system and to lower the technical threshold for potential users. My thinking is as follows:

  • The user can edit the JSON file (which could be extended to specify settings for crontab). A simple command-line or form-based interface could be offered on Windows/Mac/Unix to create/validate these files before copying them to a USB storage device.
  • The user plugs the USB storage device into the unit’s external USB socket.
  • The user pushes the extra button on the unit and triggers a script waiting for a GPIO interrupt.
  • If a USB storage device is detected, the script turns on the status light to indicate operation.
  • If the unit holds untransferred folders of images, the script transfers a copy to the USB storage device.
  • If the transfer is successful and all images are on the USB storage device, EITHER the script automatically deletes the on-board copies OR does so only in response to a configuration flag.
  • If the USB storage device includes JSON files, the script transfers these to the unit and makes any associated changes to crontab, etc.
  • The script writes a human-readable report/log to the USB storage device to document all stages and report the current configuration settings and schedule.
  • The script turns off the status light.

This would make it possible to use the device and save images, etc. even in the absence of wifi (or using a Raspberry Pi Zero without wifi support).
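The copy-then-optionally-delete step in this workflow might be sketched as below. The paths, folder layout and deletion flag are hypothetical; a real version would also need to detect the mounted USB device and verify the copies before deleting anything.

```python
import shutil
from pathlib import Path

def transfer_folders(local_root, usb_root, delete_after=False):
    """Copy any untransferred image folder to the USB device; delete the
    on-board copy only when the configuration flag allows it."""
    local_root, usb_root = Path(local_root), Path(usb_root)
    report = []
    for folder in sorted(p for p in local_root.iterdir() if p.is_dir()):
        target = usb_root / folder.name
        if not target.exists():
            shutil.copytree(folder, target)
            report.append("copied %s" % folder.name)
        if delete_after and target.exists():
            shutil.rmtree(folder)
            report.append("deleted local %s" % folder.name)
    return report
```

The returned report lines could feed directly into the human-readable log written to the USB device.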

Autonomous Moth Trap

Software components for Autonomous Moth Trap

This is the fifth post in a series:

A key goal for the Autonomous Moth Trap project is to make it easier to collect and manage time series data on insects using units that operate in the field.

The Danish project generously shared both the software that was deployed on their Raspberry Pi systems and the Python scripts they have developed for processing the resulting images.

I have used some of the code and many of the concepts from these scripts in my own system. I have created a new GitHub project to manage and share my code and other digital assets. I welcome review, bug fixes and reuse.

There are at least eight software components that should form the core for a software-data ecosystem for this trap and that may well apply to other related use cases. I have existing implementations for a number of these, although many improvements are possible. Others can follow as more images are collected.

The figure at the top of this post shows some of the relationships between these components. The two components to the left execute inside the trap (using the Raspberry Pi as the processor). The rest execute on a desktop computer or laptop (Windows/Mac/Unix).

The path indicated by the green arrows reflects my current focus and what I hope to achieve in the near future. This involves automated collection of images, software assistance in deriving a species list and minimum counts (plus a range of metadata) for each species and then publication as a sample event dataset to GBIF or other public platforms.

As the number of identified images increases for a given location or region, it will become possible to execute the path indicated by the orange arrows, building a training set from identified images and then training a machine learning model for image recognition. There is also potential to integrate machine learning into the image segmentation stage to improve classification of interesting and uninteresting objects and to enhance recognition of the same individual in multiple images.

Once a model has been trained and works, it will be possible to activate the path indicated by the purple arrows and automate much more of the process. Quality control will be important and there should probably be other links that verify the identifications and feed more identified images in to retrain the model.

The following are brief notes on each of the components indicated. More detail will be presented in subsequent posts.

Time Lapse Capture

I have a working version of this component, written in Python and controlled with a JSON configuration file. It controls the lights (moth light, ring light for illumination), a temperature/humidity sensor and the camera (interval and number of images, brightness, contrast, saturation, sharpness and JPEG quality). The output is a folder containing a series of images with timestamps and temperature/humidity readings in the filename (but I plan to add these readings to the EXIF too), along with a timestamped copy of the JSON configuration file (since this contains metadata that may be useful later). My Raspberry Pi Zero unit uses this component triggered as a cron job (or as multiple cron jobs at different times of the night).

Motion Capture

The version implemented by the Danish team uses the software developed by the Motion project, along with some small Python scripts to control lights, etc., all controlled via cron jobs. I have modified the scripts on my Raspberry Pi 4 unit to add temperature/humidity readings. I expect to expand my Time Lapse Capture component so it uses the Motion software as an alternative mode alongside Time Lapse. This will allow the configuration metadata to be largely identical for both options.

Segment Images

I have again worked from software developed by the Danish team but rewritten large sections to reflect my wishes. My version works on the folder produced by the Pi unit and then generates several derived products:

  • A CSV file listing all images and associated metadata for each (temperature, humidity, etc.)
  • A CSV file listing each “blob” of interest in any of these images, including coordinates, size, significant colours, an identifier for a “track” that represents a presumed repeated capture of the same individual across multiple images, etc.
  • A folder containing cropped images for each blob that appears or changes between images

I will also store a timestamped copy of the configuration settings for the image segmentation as part of each output data set.

Track Editor

I have written a Python GUI that shows all blobs from each track as thumbnails, allows these tracks to be split or merged, uninteresting tracks to be deleted and a species or higher taxon to be added as an identification for each track. The outputs are a local taxon dictionary (for assisting entry of identifications – this output grows over time) and a CSV file with the identifications for each track. Since the track identifiers are changed by this tool, it also writes an updated version of the blob CSV file.

Event Reporter

I have not yet implemented this component, but it will take the data from the image, blob and track CSV files and produce a derived CSV file with minimum counts for each species or taxon recorded during the night, packaging this (along with all metadata from the configuration files) as a sampling event dataset (Darwin Core Archive or Frictionless Data) ready for publication to GBIF or other biodiversity data platforms.
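Since each track represents a presumed individual, the minimum count for a taxon would simply be the number of tracks identified as that taxon during the night. A minimal sketch, assuming hypothetical column names in the track CSV:

```python
import csv
from collections import Counter
from io import StringIO

def minimum_counts(track_csv_text):
    """Count identified tracks per taxon; rows without an identification
    are skipped. The `taxon` column name is an assumption."""
    reader = csv.DictReader(StringIO(track_csv_text))
    counts = Counter(row["taxon"] for row in reader if row["taxon"])
    return dict(counts)
```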

Training Set Manager

Given the outputs from the Track Editor, it will be possible also to build lists of blob images (and associated metadata) for each species identified. These should be managed to allow selection of a good training dataset to build a machine learning model for species identification. As well as the image content, the metadata will have good information on size, movement, time of appearance, etc. which may improve the models.

Model Builder

The images in the training set can be used to develop a machine learning model to identify the same species in subsequent samples. Metadata from associated configuration files will be captured to assist future interpretation.

Species Recognition

The final component will be a module that runs the machine learning model and generates similar data to the Track Editor (but with additional metadata). These results can then be fed directly to the Event Reporter or (more likely, especially in early phases) into a validation process.

Autonomous Moth Trap

Alternatives for Autonomous Moth Trap

This is the fourth post in a series:

Raspberry Pi 4 + Logitech BRIO + motion detection

The components used in my first autonomous moth trap (mostly following the Danish design, except for the LED tube) are:

The circuit design is available here.

The Logitech camera depends on a USB 3.0 connection, which requires a Raspberry Pi 4; this in turn necessitates venting the enclosure and adding a fan. The power of this unit makes it possible to use motion detection to capture images. This has demonstrable benefit in producing time series imaging for active insects, but (on warm nights) results in images being captured almost every 2 seconds.

A sample from this model has been uploaded as a video.

Raspberry Pi Zero + Raspberry Pi HQ camera + time lapse

I wanted to test a more lightweight (and significantly cheaper) alternative that could more easily be deployed on battery power in the field.

The A3 LED light table in the original model seems to add little to the effectiveness of the system. Power is better devoted to running the moth light and the illumination from the light ring.

The Raspberry Pi Zero has significantly lower power consumption than the Raspberry Pi 4 and does not require a fan or venting. It is also compatible with the Raspberry Pi HQ camera and 6mm wide-angle lens. This camera has a larger image size than the Logitech Brio for less than half the cost and without the need for a USB 3.0 connection.

I have therefore constructed a second model using these components:

I have written a Python script (triggered as a cron job) to capture images on time lapse. The code is in GitHub along with other software and files for the project. A JSON configuration file controls various settings and the images are saved to a folder including a timestamped copy of this configuration.

I used the instructions here to add the real-time clock to the Pi: Adding a DS3231 Real Time Clock To The Raspberry Pi. The circuit diagram for this unit is available here. The number of UV lights is adjustable (3, 6 or 9).

An early result from this trap has been uploaded as a video.

Autonomous Moth Trap

Hardware and Software updates to Autonomous Moth Trap

This is the third post in a series:

This post provides notes on modifications and enhancements to my implementation of the automated moth trap designed and implemented by Kim Bjerge and colleagues (Bjerge, K.; Nielsen, J.B.; Sepstrup, M.V.; Helsing-Nielsen, F.; Høye, T.T. An Automated Light Trap to Monitor Moths (Lepidoptera) Using Computer Vision-Based Tracking and Deep Learning. Sensors 2021, 21, 343).

High-power UV LED tube

High-power UV LED strip

The original version used a 15W fluorescent UV tube as the attractant. Unfortunately, these tubes lose brightness with use and therefore add an unnecessary amount of variation (given my desire to standardise as much as possible).

I have therefore built a replacement with nine high-power LEDs (six UV and one each of white, green and blue) in an acrylic tube (and with an aluminium extruded channel for heat dissipation). This seems to work well and is readily powered from the same 240V AC to 12V DC power supply as the Raspberry Pi, ring light and light table.

Rebuild of housing

I have also completely rebuilt the unit in a new housing that includes better controls (soft shutdown of the Raspberry Pi via a push button, external control of the light table brightness, override switch to turn on all lights independently of the Raspberry Pi, and status LED).

Most significantly, it now includes an external temperature and humidity sensor so that local environmental readings can be added to all images.

Image processing software

The goal is to train a machine learning model for automated species recognition. Several stages are planned:

  1. Run one or more traps nightly to collect images annotated with date/time, temperature and humidity
  2. Segment and organise cropped images for each significant blob in the source images, annotating them with position, size, colour classification and track (series of images for the presumed same blob)
  3. Curate the cropped images to validate tracks (including joining or breaking as appropriate) and associate species (or higher taxon) identifications
  4. Build reference sets for identified species (annotated with all the above elements)
  5. Train a model to identify these species and count associated tracks per night
  6. Run the trap continuously to generate time-series data

The Danish team developed Python software (MCC-trap) for the second of these stages, but their focus was on identification of a small set of training species. I chose to refactor all their code to focus on my needs, in particular to export and annotate all blobs and organise these in tracks for my subsequent stages.

MCC-trap derives tracks using two primary measures of similarity/distance between blobs in consecutive images: size of the blob and distance apart. In addition, the code allows for a blob/track to reappear within five frames.

I chose to use additional dimensions to assess similarity/distance. My code now assesses five factors each as costs on a 0.0 – 1.0 scale:

  • Distance: Blob distance is calculated as a cost as in MCC-trap, i.e. as a fraction of the diagonal size of the image. However, if the centroids of the blobs are within the central 20% of each dimension of the other image, and if the larger blob is no more than 20% larger than the smaller, they are treated as identical (overriding all other cost calculations).
  • Size: Blob size is compared using the ratio of the number of pixels in the larger blob to the number of pixels in the smaller. If the ratio is four or higher, the cost is given as 1.0. Otherwise the cost is measured as (ratio – 1) / 3.
  • Bearing: If a track already includes at least two blobs, direction is assessed using the difference between the bearing of the line joining the last two blobs and the bearing of the line between the last blob and a candidate blob. The cost is 0.0 if the bearing is unchanged and 1.0 for +180° or -180°.
  • Colour: An RGB histogram is generated for the pixels inside the blob and a set of letters is associated with the blob if the proportion of pixels near each of the eight vertices of the histogram exceeds a defined threshold. At present, the histogram is a 2x2x2 cube and the threshold for each of the eight sectors is 2%. This is still under investigation.
  • Age: The cost for linking to a blob in the previous frame is 0.0, increasing to 1.0 for the fifth last frame.
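The per-factor rules above can be sketched as small cost functions. The blob representations here are simplified assumptions (pixel counts, bearings in degrees, pixel lists of 0-255 RGB tuples), not the actual data structures in my code.

```python
def size_cost(pixels_a, pixels_b):
    """Ratio of larger to smaller blob; >= 4 costs 1.0, else (ratio - 1) / 3."""
    ratio = max(pixels_a, pixels_b) / min(pixels_a, pixels_b)
    return 1.0 if ratio >= 4 else (ratio - 1) / 3

def bearing_cost(previous_bearing, candidate_bearing):
    """0.0 for an unchanged bearing, 1.0 for a 180-degree reversal."""
    delta = abs(candidate_bearing - previous_bearing) % 360
    if delta > 180:
        delta = 360 - delta
    return delta / 180.0

def age_cost(frames_back):
    """0.0 for the previous frame, rising linearly to 1.0 for the fifth-last."""
    return (frames_back - 1) / 4.0

def colour_letters(pixels, threshold=0.02):
    """Letters a-h for the 2x2x2 RGB histogram vertices whose share of the
    blob's pixels exceeds the threshold (currently 2%)."""
    counts = [0] * 8
    for r, g, b in pixels:
        counts[(r >= 128) * 4 + (g >= 128) * 2 + (b >= 128)] += 1
    total = len(pixels)
    return "".join("abcdefgh"[i] for i in range(8)
                   if counts[i] / total > threshold)
```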

These five costs are each assigned a weight, currently 4 each for Distance and Size, 2 each for Bearing and Colour and 1 for Age. Hence, the actual cost estimates are distances in a 4x4x2x2x1 five-dimensional space. As with MCC-trap, these costs are fed into the Hungarian algorithm to solve the problem of the lowest-cost pairing of new blobs to existing tracks.
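The weighted combination and the assignment step can be sketched as below. Reading "distances in a 4x4x2x2x1 five-dimensional space" as Euclidean distance in the weighted space is my interpretation here; and where the real code feeds the cost matrix to the Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment), a brute-force search stands in to keep the sketch dependency-free.

```python
import math
from itertools import permutations

WEIGHTS = {"distance": 4, "size": 4, "bearing": 2, "colour": 2, "age": 1}

def combined_cost(costs):
    """Distance in the weighted five-dimensional cost space; `costs` maps
    each factor name to its 0.0-1.0 cost."""
    return math.sqrt(sum((WEIGHTS[f] * c) ** 2 for f, c in costs.items()))

def best_assignment(cost_matrix):
    """Lowest-total-cost pairing of new blobs (rows) to tracks (columns);
    brute force stands in for the Hungarian algorithm in this sketch."""
    n_rows, n_cols = len(cost_matrix), len(cost_matrix[0])
    best, best_total = None, float("inf")
    for cols in permutations(range(n_cols), n_rows):
        total = sum(cost_matrix[r][c] for r, c in enumerate(cols))
        if total < best_total:
            best, best_total = list(enumerate(cols)), total
    return best, best_total
```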


As shown in this gallery of blobs sorted by track id, the processing outlined above seems to be giving generally good results with few false positive matches between new blobs and existing tracks but rather more cases in which a track is broken into segments.

Two videos have been uploaded to Flickr to illustrate progress so far.

ArabaAMT-20210925 presents 5027 images from an eight-hour period during the night of 24-25 September 2021 as a 10-minute 5-fps video with annotations showing the track ids, blob ids and cost measures. A wide variety of moths (including large numbers of Tortricidae) appear during the video, many of them identifiable.

ArabaAMT-20210613 has been built from 947 images taken during the night of 12-13 June 2021, when many fewer insects were active. This video has been used as a test of annotating tracks with identifications. Insects have been labeled as they appear and include:

  • Lepidoptera – Lepidoscia sp. (Psychidae), Lepidocroca sanginolenta (Oecophoridae), at least one other Gelechioidea, Meritastis sp. and other Tortricinae, Chloroclystis filata, Capusa senilis and a Nacophorini (probably Fisera sp.) in flight (Geometridae), and Ectopatria horologa (Noctuidae).
  • Neuroptera – Micromus tasmaniae (Hemerobiidae), Chrysopidae.
  • Hymenoptera – Ichneumoninae.
  • Diptera – Muscoidea plus other flies.

Autonomous Moth Trap

Autonomous Moth Trap Hardware Revisions

This is the second post in a series:

The Autonomous Moth Trap project seeks to build on the work of Kim Bjerge and his colleagues in developing a simple system controlled by a Raspberry Pi 4 for capturing images of live moths attracted to a UV light and a set of tools using OpenCV image processing and machine learning to recognise the moths imaged. My earlier post documented the results of building a version of the trap that more or less exactly replicated the setup of the Danish design, other than rewiring the LED light table so that it could be controlled with the other lights rather than requiring manual intervention.

Standardisation is key to ensuring that biodiversity observations are as comparable as possible across time and space. I have been keen not to modify the Danish system unnecessarily. However, I would like to solve some challenges now rather than leaving them until after I have been running my system for a while.

My most significant concern has been around the use of a 15W fluorescent tube for the main light that attracts the moths. These work and attract moths, but the electronics to run these well on DC are complicated and the tubes tend to blacken at one end over time and lose brightness, which inevitably introduces variation and difficulty in comparing the resulting data. It therefore seems appropriate to replace the tube with an array of high-power LEDs. I have decided to build a trap with 9 such LEDs. This significantly changes the power requirements for the trap and is probably only appropriate for use with mains power, but I want to experiment with the capabilities of the LEDs before attempting to construct a battery-powered version.

Secondly, I wanted to collect some basic environmental measurements as the images are collected, so I have added a basic temperature and humidity sensor to be read by the Raspberry Pi.

Additionally, the first version I built lacked some basic control features:

  • Shutting down the system safely (rather than simply cutting power) required an ssh connection to the Raspberry Pi. I wanted to add at least a tactile switch to trigger a safe shutdown and have modified this Pi Power Button example.
  • The Raspberry Pi fan was wired straight to the 5V pin and continued to run, even when the Raspberry Pi was shut down. I wanted it only to run while the system was active (and potentially to disable it below some temperature) and have used this approach from the Raspberry Pi Forum.
  • The only way to be certain that the system was running was to listen for the fan (or to use ssh or some other access protocol). I wanted to add a multicoloured LED so I could show operational status. I considered using a full-RGB LED (four pins) but only require a bicolour LED (red/green, two pins).
  • On Twitter, Hernán L. Pereira pointed out the need for a flyback diode across the relay coil, so this has also been added to the circuit.
  • I’ve also added a second method to turn on the lights for testing purposes, a toggle switch connecting 3.3V power to the same transistor normally activated/deactivated by a cron job on the Raspberry Pi. A tactile switch would have been appropriate, but I wanted to avoid confusion between the power on/off switch and the light testing switch.

The diagram at the top of this post, which shows the circuit as currently planned, was created using a very friendly online circuit design tool. I have organised the use of the Raspberry Pi pins to keep the circuit clear and uncluttered.

Autonomous Moth Trap

Autonomous Moth Trap Project

This project seeks to build an automated moth trap with a machine learning identification model for Canberra moths (and other insects) based on the design published earlier this year by Kim Bjerge and colleagues in the journal Sensors:

Bjerge, K.; Nielsen, J.B.; Sepstrup, M.V.; Helsing-Nielsen, F.; Høye, T.T. An Automated Light Trap to Monitor Moths (Lepidoptera) Using Computer Vision-Based Tracking and Deep Learning. Sensors 2021, 21, 343.

I want to thank Kim Bjerge and Toke Thomas Høye for their generous assistance answering questions and sharing code.

This is the first post in a series:


I started using a moth trap in the mid-nineties in the UK. At that time, I used a Heath trap with a 6W actinic tube and drew pictures of moths with crayons since I did not have a digital camera. Over the years, I’ve used a wide variety of moth traps and moth lights, including 15W actinic tubes and 125W MV lamps in various configurations, mainly from Anglian Lepidopterist Supplies in the UK, and the LepiLED lights from Dr Gunnar Brehm. Since 1999, I’ve used a series of different digital cameras to document the insects I’ve attracted (see Flickr and iNaturalist).

All of these efforts have in some small way contributed to knowledge of biodiversity. Observation records flow into the Global Biodiversity Information Facility and the Atlas of Living Australia. However, every one of these records is a response to random and non-standard circumstances. In one year, I may operate a light for many nights and photograph many insects. In another year, I may do much less.

This lack of standardisation is common throughout natural history and citizen science. In recent years, there has been growing concern at the apparent collapse in insect numbers in many regions of the world, but there are few places from which we have genuinely comparable data over long periods. We therefore cannot easily assess how significant the changes have been. We certainly have no scalable way to monitor insect populations and understand how composition and abundance varies across space and through time.

I have therefore been interested for a long time in ways that we could automate detection and monitoring for insects and other more cryptic groups. One obvious route is to build the tools for DNA-based monitoring. Hence my enthusiasm for Malaise trapping as a path to building the necessary reference libraries.

One other idea I have been keen to explore is relatively simple automation of imaging of insects coming to light. I sketched this (possibly self-explanatory) concept in July 2018, but never found the time to build it. It basically involves bright insect-attracting lights, on a timer so that they can leave the insects in peace before dawn, and (in my sketch) two cameras to image species landing on a vertical surface and on the ground below.

A sketch of a possible automated camera trap for moths, 31 July 2018

The purpose would just have been unsupervised collection of photographs that I would then upload to iNaturalist or elsewhere with my manual identifications.


Early in 2021, I came across the paper by Kim Bjerge and his colleagues on an automated trap to detect and recognise a subset of Danish noctuid moth species.

Figure 1 from Bjerge et al. 2021: The portable light trap with a light table, a white sheet, and UV light to attract live moths during night hours. The computer vision system consists of a light ring, a camera with a computer and electronics, and a powered junction box with a DC-DC converter.

Bjerge et al.’s design involves the following components:

  • Raspberry Pi 4 to control lights and webcam (with motion detection software)
  • 15W actinic tube operated with 12V DC (to attract moths)
  • A3 LED light table (as contrasting surface on which moths can rest)
  • Logitech BRIO 4K Ultra HD webcam
  • LED ring light (to illuminate light table from camera side)
  • A circuit that allows the Raspberry Pi to turn the moth light and the ring light on and off at scheduled times

The Raspberry Pi software includes Python scripts to turn the lights on and off (through one of the GPIO pins) and to run motion detection and the Motion imaging software (Motion Project at GitHub).

Images collected by the system can then be processed using the Python Moth Classification and Counting software developed by Kim Bjerge (MCC-trap at GitHub). As described in the Sensors paper, this detects blobs in the images, tracks movement of the (presumed) same blob between images, and uses a Convolutional Neural Network model to classify the blobs based on a training set. In the Danish experiment, the training set focused on a small number of common noctuoid moths (eight classes).


I used the following components to build my version of the trap:

I had not attempted any electronics since I was a young child with my father (other than some atrocious attempts to solder loose wires). The following resources helped me:

  • This YouTube Soldering Crash Course showed me everything I had got wrong with my earlier efforts at soldering – it’s actually all surprisingly easy
  • Charles Platt’s book, Make: Electronics, is a very helpful guide to the basics of using simple components in circuits

Some notes on issues I encountered or decisions I made:

  • The Huion light box, like perhaps all other models on sale, is turned on and off and dimmed or brightened with a touch sensor behind the glass. Whenever the unit is first powered on, the light is initially off. As I understand it, the Danish team turn the light box on manually when the trap is in use. The backing sheet on this Huion unit is attached with a tacky glue and can be peeled back. This makes it possible to desolder the wires that feed the LED strip and attach them directly to where the 12V DC input connects. This bypasses the touch sensor. The touch sensor in this model has negligible resistance when the light box is at its brightest, so no additional resistor is required when bypassing it.
  • The change to the light box allowed me to turn it on and off automatically in conjunction with the other lights. I included the second potentiometer so that I could vary its voltage below 12V. (The first potentiometer was already in the reference circuit to control the light ring.)
  • Bjerge et al. used a 12V battery to power their trap. Since I plan to operate the trap in areas with access to mains electricity, I used the laptop-style power supply listed above. I used the second DC-DC converter to power the Raspberry Pi at 5.1V from the same source. (The first converter was to provide 5V power for the light ring, but that circuit is off until the Raspberry Pi turns it on.)
  • Bjerge et al. included a 75 Ohm resistor to limit the voltage across the transistor and coil of the relay. Using the same resistor in my circuit with a mains DC supply that is doubtless at a higher voltage than a battery, the Raspberry Pi correctly activated the transistor through its GPIO pin, which caused the relay to close and power the lights. However, the relay remained closed once the GPIO pin was reset to 0V. Replacing the 75 Ohm resistor with 300 Ohm allowed the circuit to work as expected.
  • I wired everything inside the enclosure, with holes for the power cable, the cables for the three lights, the lens of the webcam and the ventilation grille (with two layers of the fly screen). The vent was to ensure that the Raspberry Pi does not overheat and seemed the simplest reasonably rain- and insect-proof solution I could find.
  • I printed a 3D model in ABS to hold the various components in place.
  • I mistakenly tried using a shorter USB cable to connect the camera to the Raspberry Pi. I used a USB 3.0 port but was stupid enough not to know that USB 3.0 cables are different from USB 2.0 ones. The short cable was not compatible and limited the camera to HD instead of Ultra HD 4K resolution.
  • It would have been better to have used an external knob and potentiometer to adjust the brightness of the light box (rather than the one I soldered into the circuit).
  • Using UV LEDs (as with the LepiLED) might be a simpler solution in place of the 15W tube. I may experiment with such an alternative.


Moth trap in operation

The webcam is positioned around 220 mm from the light table. It is configured to focus at this range and to use an exposure of around 130 ms. The light box is kept relatively dim simply to offer some contrast and most light comes from the ring light. I am using a white cotton pillowcase over the light table to provide an attractive landing surface for insects.

I’ve set the trap to come on automatically at 18:00 and turn off around 05:00. The cron settings are:

59 17 * * * python /home/pi/
00 18 * * * motion
01 18 * * * /home/pi/
00 05 * * * pkill motion
01 05 * * * python /home/pi/

At present I control the Raspberry Pi entirely through a PuTTY telnet connection and access the images by FTP through FileZilla. I plan to automate transfer of the images to a computer so they can be processed each day.

Right now, it’s winter in Canberra, often reaching below freezing for some of the night, so there are relatively few insects. I’ve uploaded some examples (including a number captured only at HD resolution) to iNaturalist.

The trap is now more or less ready for collecting what I hope will be many thousands of images in the warmer months.


The MCC-trap software has not been trained for Australian moths, but this video is a test of how it works with the set of images collected overnight, 3/4 June 2021:


For the coming months, the trap will operate as a wildlife camera trap. I will segment the images, add identifications and build training sets for classifying the commoner species here. I’d like to get to the stage where I could even fully automate some records being submitted to iNaturalist.

As can be seen in the video, the out-of-the-box processing selects a fair number of background areas and misses some insects. It may be useful for me to spend some time training a model first to distinguish between things that interest me (insects and other animals) and things that don’t (shadows, blurs, etc.). This should simplify extraction of the segmented images for training an identification model.