Needles, haystacks and the technology response

Published on: August 2020

There’s hardly an ocean or sea which isn’t prey to some kind of illegal activity, or a witness to misfortune. Foreign and domestic fishermen plunder dwindling stocks across the world, while any country with a coastline is vulnerable to illegal activities – drug and people smuggling, environmental pollution and worse.

Government agencies, ranging from Coast Guards to Search and Rescue (SAR) crews, face two major challenges: locating and identifying targets quickly in vast oceans and seas, across a wide range of environmental conditions and sea states; and then getting to them in time to undertake a rescue or gather evidence for a prosecution. The answer to both, says Australian company Sentient Vision Systems, is a portfolio of sensors strengthened by Artificial Intelligence (AI)-enabled data fusion.

Sentient Vision Systems manufactures ViDAR, the world’s first passive, wide-area optical radar, which has a proven detection rate of 96% for human head-sized objects in the water, even in Sea State 6. Fusing data from an aircraft’s primary detection sensors, such as ViDAR, radar and AIS, with its secondary inspection sensors, such as turreted EO/IR sensors, provides a higher detection rate for small objects and timely identification, and enables a surveillance/SAR aircraft or rescue helicopter to get to the target as quickly as possible. When air platforms can cost $15,000 an hour to fly, that matters too.

There are cooperative targets (in relative terms): mariners in distress and survivors in the water who are not trying to conceal themselves, even if they’re hard to spot. And then there are uncooperative targets, ranging from narcotics smugglers and terrorists to Illegal, Unreported and Unregulated (IUU) fishers, who try to make themselves as difficult to find as possible.

In both cases, sensor fusion can increase the efficiency and economy of a surveillance/SAR mission. Imagine surveilling a suspicious sea area. Sensor fusion begins with AIS, which every sea-going vessel should have enabled, though the majority of fishing boats aren’t equipped with it. This provides the basis of an integrated Local Operating Picture (LOP). Then an aircraft can conduct an initial search using ViDAR and compare the number of targets it detects with the number of AIS transponder signals.

Depending on the nature of the suspicious activity, the operator can focus on radar and ViDAR targets that don’t correlate with an AIS return and head straight for them. However, some illegal operators ‘spoof’ their AIS transponders to make it look as if their vessels are actually elsewhere, so even innocent-looking vessels need to be investigated.

A passive ViDAR sensor will detect a range of surface targets, presenting a thumbnail image on the operator’s mission system and a target geolocation on a moving map display, allowing the operator to cross-cue to the target(s) for further identification. Meanwhile, the operator can determine, by cross-referencing the AIS, radar and ViDAR tracks, whether or not a target is suspect. If no target is detected despite an AIS return stating that this is where the vessel actually is, you’re almost certainly looking at illegal activity.
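The cross-referencing step itself is easy to illustrate. The sketch below is a minimal Python example, assuming a made-up contact record, a simple haversine distance and an illustrative one-nautical-mile correlation gate; none of these names or thresholds are drawn from Sentient’s actual mission system. Detections with no nearby AIS report come back as ‘dark’ contacts worth cross-cueing onto, while AIS reports with no corresponding detection are candidates for the spoofing described above.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

# Illustrative contact record -- not Sentient's actual data model.
@dataclass
class Contact:
    track_id: str
    lat: float
    lon: float
    source: str  # e.g. "ViDAR", "radar" or "AIS"

def distance_nm(a: Contact, b: Contact) -> float:
    """Great-circle (haversine) distance between two contacts, in nautical miles."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 3440.065 * asin(sqrt(h))  # mean Earth radius of ~3440 nm

def correlate(detections, ais_reports, gate_nm=1.0):
    """Split ViDAR/radar detections and AIS reports into suspect sets."""
    dark = [d for d in detections
            if not any(distance_nm(d, a) <= gate_nm for a in ais_reports)]
    possible_spoofing = [a for a in ais_reports
                        if not any(distance_nm(a, d) <= gate_nm for d in detections)]
    return dark, possible_spoofing
```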

ViDAR can fill capability gaps in radar surveillance. Above Sea State 2 or 3, radar performance is reduced: targets may be masked by sea clutter in high sea states, or the target may be a small wooden or GRP vessel with a small or non-existent radar signature. A search employing an integrated, passive ViDAR and radar sensor suite provides an extremely high probability of detection, reduces operator workload on long, demanding missions and provides a higher level of tactical situational awareness, which in turn helps operators detect and then identify targets of interest much more quickly.

For example, the Australian Maritime Safety Authority’s (AMSA) four Bombardier Challenger 604 SAR aircraft are fitted with a mapping system, weather radar, multi-mode surveillance radar, AIS system, EO/IR turret and passive ViDAR sensor. This level of integration enables the operator to build a LOP and also focus on a specific search. Introduced in 2016, the Challenger 604s have conducted more than 120 SAR missions and helped rescue 270 people.

Integrated sensor suites create that LOP aboard a single aircraft today. The next step, however, is to integrate data from sensors mounted on a fleet of separate airborne platforms operating as a team. A large UAV could carry a lightweight surveillance radar, turreted EO/IR sensor and passive ViDAR sensor; smaller UAVs such as Insitu’s ScanEagle might carry only an EO/IR sensor and a passive ViDAR sensor.

In the near future, a mixed fleet of UAVs and manned platforms could conduct a synchronised search of a much larger ocean or sea area, sending real-time sensor track data to a ground control station, which would then fuse all of the incoming data to create a Common Operating Picture (COP) to be shared across agencies and re-broadcast to manned SAR or surveillance aircraft.
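As a rough illustration of that fusion step, the sketch below reuses the Contact record and distance_nm helper from the earlier example and simply merges per-platform track lists, treating contacts reported within a small gate of one another as the same vessel. A real COP fusion engine would also weigh sensor accuracy, time of report and track history; the names and threshold here are assumptions made purely for illustration.

```python
def fuse_cop(platform_tracks, gate_nm=1.0):
    """Merge track lists from several platforms into one Common Operating Picture.

    platform_tracks maps a platform name (e.g. "ScanEagle-1") to a list of
    Contact objects; a contact within gate_nm of an already-fused contact is
    treated as a duplicate report of the same vessel.
    """
    fused = []
    for platform, tracks in platform_tracks.items():
        for track in tracks:
            if not any(distance_nm(track, existing) <= gate_nm for existing in fused):
                fused.append(track)
    return fused
```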

Integration of other information from previous missions would determine whether a ‘pattern of use’ exists and, if so, whether it has been broken, either by suspicious behaviour or by some kind of misfortune. This would help AI-enabled mission systems to detect previously undetectable suspicious or criminal behaviour, or predict the locations of missing craft or individuals. This in turn shortens the time taken for a SAR or response platform to reach the search area, detect and identify survivors or suspicious activity, and then take action.
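A simple ‘pattern of use’ check can be sketched in the same spirit. The example below is illustrative only: the history format, the 25-nautical-mile threshold and the idea of comparing against a historical centre of activity are assumptions, not a description of any real AI-enabled mission system.

```python
from math import radians, sin, cos, asin, sqrt
from statistics import mean

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon fixes, in nautical miles."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    h = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 3440.065 * asin(sqrt(h))

def breaks_pattern(history, current_fix, threshold_nm=25.0):
    """history: (lat, lon) fixes for one vessel from previous missions.
    Returns True if the current fix sits unusually far from the vessel's
    historical centre of activity -- a crude 'pattern broken' trigger."""
    centre_lat = mean(lat for lat, _ in history)
    centre_lon = mean(lon for _, lon in history)
    return haversine_nm(centre_lat, centre_lon, *current_fix) > threshold_nm
```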

The technology may be cool, but it must always serve the mission.
