Author: Kitty Drok
Project Team: Dr Paul Hancock, A/Prof. Randall Wayth, Dr Xiang Zhang, Prof. Steven Tingay, Mr Sean Mattingley, Dr Kevin Chai and Mr Shiv Meka.
Any transient radio emissions from these sources could be detected by radio telescopes such as the Murchison Widefield Array (MWA) and the future Square Kilometre Array (SKA), and have the potential to provide new insights into how meteors interact with the ionosphere as they fall to Earth. Being able to study meteors via their radio emissions would also give access to elusive daytime meteor showers that can’t be detected with optical telescopes.
The Desert Fireball Network (DFN) already has a network of 50 optical cameras across the Australian desert, and software that can detect the brightest fireballs as they descend – the ones most likely to leave a meteorite. But the research team also needed to detect the faintest meteor streaks in the sky, and correlate those sightings against radio telescope data to identify any characteristic radio emissions.
Working with Mr Hadrien Devillepoix from the DFN, the radio astronomy team installed a modified DFN camera – the ‘astrocam’, with a narrower field of view and optimised for higher sensitivity – out in the Murchison to observe the same part of the sky as the MWA. Using the astrocam they collected photos during the nights of the Geminid meteor shower to correlate against the MWA data. The difficult part was then finding all of these rare meteor trails, faint as well as bright, in thousands of photos of the night sky, to compare with any radio signals originating from the same time and place.
Mr Shiv Meka and Dr Kevin Chai from the Curtin Institute for Computation joined the project to build a machine learning algorithm to automatically detect even the faintest meteor trails in the mass of digital photos. To train it to identify meteor trails, Dr Xiang Zhang sorted through 6,000 images and manually identified and annotated about 70 meteors.
“It was a thankless job”, admits Hancock. “The digital images are higher resolution than our computer screens – so if you display the full image the screen crops and interpolates pixels, and faint trails disappear. Xiang Zhang had to zoom in and pan around each image in smaller sections, and it was still easy to miss fine detail. She spent the entire day sorting through 6,000 images to find just 70 meteors. Unfortunately that wasn’t a large enough data set to train the convolutional neural net effectively. We’ve got years’ worth of data available to look through, but you’d go crazy searching through it all!”
To bypass the lack of training data and avoid weeks of manual searching for more of these rare events, Chai and Meka created artificial training images, taking pictures of the night sky without meteor trails from the data set and adding random lines in a variety of positions, orientations, lengths, thicknesses and light intensities to mimic meteor trails. They rapidly created a dataset of 30,000 simulated meteor trails, which was then used to train the neural network.
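The idea of painting synthetic streaks onto real sky frames can be sketched roughly as follows. This is an illustrative sketch only – the function name, parameter ranges and Gaussian streak profile are assumptions, not the team’s actual code:

```python
import numpy as np

def add_synthetic_trail(image, rng):
    """Draw one random bright streak onto a copy of a night-sky image.

    Position, orientation, length, thickness and intensity are all
    randomised, mimicking the variety of real meteor trails.
    """
    img = image.astype(float).copy()
    h, w = img.shape

    # Random start point, orientation and length of the streak.
    x0, y0 = rng.uniform(0, w), rng.uniform(0, h)
    angle = rng.uniform(0, 2 * np.pi)
    length = rng.uniform(20, 200)
    x1, y1 = x0 + length * np.cos(angle), y0 + length * np.sin(angle)

    # Random peak brightness and thickness.
    intensity = rng.uniform(5, 50)   # counts above the background
    sigma = rng.uniform(0.5, 2.0)    # Gaussian half-width in pixels

    # Perpendicular distance from every pixel to the streak segment,
    # then add a Gaussian cross-section profile along it.
    yy, xx = np.mgrid[0:h, 0:w]
    dx, dy = x1 - x0, y1 - y0
    t = np.clip(((xx - x0) * dx + (yy - y0) * dy) / (dx**2 + dy**2), 0, 1)
    dist2 = (xx - (x0 + t * dx)) ** 2 + (yy - (y0 + t * dy)) ** 2
    img += intensity * np.exp(-dist2 / (2 * sigma**2))
    return img

rng = np.random.default_rng(42)
sky = rng.normal(100, 5, size=(128, 128))   # stand-in for a real sky frame
trail = add_synthetic_trail(sky, rng)
```

Because the streak parameters are drawn at random, repeating this over a library of trail-free frames cheaply generates tens of thousands of labelled examples.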
The resulting algorithm was surprisingly successful at detecting meteor trails in the 6,000 real digital photos that had been manually examined and annotated by Zhang. It found Zhang’s 70 meteors, but also identified quite a few others. Hancock gleefully recounts: “Chai and Meka were initially disappointed that their algorithm wasn’t working well, and generating so many ‘false positive’ results. But when we manually re-examined the images identified in the results, in every case we found genuine examples of very faint meteor trails, or other transient phenomena such as satellites. It was far more sensitive to faint meteors than we expected it to be.”
The algorithm is now being trained on a larger artificial data set that incorporates different weather events such as high cloud, different phases of the Moon, and different amounts of Galactic background (the Milky Way) in the sky at different times of year, to enable it to better identify meteor trails under varying conditions.
“We’ve now got it up to 100% accurate on real data”, says Chai. “We don’t actually know how many meteors are in those images – it’s a big universe – but 100% of our predictions are meteors or other recognised transients.”
The team is now working to calibrate the astrocam images so that the brightness of the stars and transient events they capture can be measured against a universal scale. From there they can transfer the calibration to the cameras in the DFN network, allowing brightness measurements to be extracted from the petabytes of data stored in the Pawsey Supercomputing Centre archives since the DFN went digital four years ago.
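Photometric calibration of this kind typically reduces to solving for a zero point: comparing the measured brightness of known reference stars against their catalogue magnitudes. A minimal sketch – the fluxes and catalogue magnitudes below are invented illustrative numbers, not real survey data:

```python
import numpy as np

# Instrumental fluxes (counts) measured for reference stars in one frame,
# and their known catalogue magnitudes (illustrative values only).
flux = np.array([15000.0, 8200.0, 3100.0, 950.0, 400.0])
catalogue_mag = np.array([12.1, 12.75, 13.81, 15.09, 16.03])

# Instrumental magnitude: m_inst = -2.5 * log10(flux).
inst_mag = -2.5 * np.log10(flux)

# The zero point maps instrumental onto catalogue magnitudes:
# m_cat ~ m_inst + ZP.  Fitting it is just the mean offset.
zero_point = np.mean(catalogue_mag - inst_mag)

def calibrated_mag(f):
    """Place any measured flux in the frame on the universal scale."""
    return -2.5 * np.log10(f) + zero_point
```

Once a zero point is known per frame, the same offset converts every detection in that frame – star, meteor or satellite – to a standard magnitude, which is what makes the archived DFN frames comparable across cameras and years.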
“So much extra science can be done because all of the information is already in the data”, Hancock enthuses. “We just need to develop these algorithms to extract it. There’s lots of information in the brightness profile we could explore – are bright fireballs simply bigger versions of the faint ones, or are they qualitatively different? How does the brightness of the meteor change through its trajectory? What does that tell you about the density of the atmosphere or the fragmentation of the meteor? Over four years of data, how does the incidence and brightness of meteors across Australia vary?”
The algorithm will be able to sift through years of accumulated observations, finding rare events like meteor trails to compare against the radio emissions recorded at the same time and location by the MWA. Automating feature detection in the optical and hence the radio data will allow the team to detect and study all sorts of transient events, from meteors and supernovae to space junk re-entering the atmosphere.
Hancock sums it up: “A rule of thumb in astronomy is that if you build a new instrument, or develop a new capability, you will find new things. How you look for things in the Universe affects what you actually see. So if you look at the Universe in a new way, like combining transient signals in the audio and radio, you will see things no-one else has seen before. There are things we know we will find, daytime meteors for example, but the real excitement is in what else may be out there. Who knows what will turn up once we start looking?”
The dataset is already there, but it will take a machine learning algorithm to find the matching needles in the haystack so the team can read it properly.