ISR signal processing at the edge

Photo courtesy of the U.S. Department of Defense

Military users of signal processors seemingly want it all: parts that can process more data while remaining less detectable to the enemy, transmit data more quickly without heating up from the effort, and deliver extremely high performance while staying lightweight and ideally palm-sized. These requirements present an obvious challenge for engineers designing these processors for intelligence, surveillance, and reconnaissance (ISR) applications. The various solutions to these ISR demands involve artificial intelligence, machine learning, classification algorithms, and sensor fusion.

The amount of data that needs to be processed in intelligence, surveillance, and reconnaissance (ISR) applications is nearly endless and frequently comes from varying sources. Identifying specific objects of interest within this barrage of data requires a very sophisticated level of processing; on top of the sheer amount of material coming in, much of the information is often very sensitive.

“The amount of data to be processed continues to increase,” says Shaun McQuaid, director of product management at Mercury Systems in Andover, Massachusetts. “What we’re dealing with is often called a ‘big data problem,’ which needs to be solved on the platform itself, and I think this is a trend we’re seeing more and more of. In order to deal with big data problems, you have to be able to leverage a data center solution, which is what we’re focused on here.”

In other words, the ability to take that glut of sensor data and make sense out of it requires a huge amount of processing on the platform, a process that enables users to sift through the signals on the receiving end and decide which countermeasures should follow. All of this requires acquisition and translation of real-time ISR imagery – or quickly and efficiently making sense of the data being recorded – a processing capability that calls for significant speed and power.

These considerable modern-day processing requirements have led to what industry professionals claim to be revolutionary design characteristics. Driving the thought behind ISR processors’ builds now are trends including machine learning-enabled information extraction; significantly reduced size, weight, and power (SWaP); and the ever-increasing demand for higher bandwidth.

“The ability to detect objects of interest, while filtering out noise and transmitters that you don’t want to see, requires a never-ending increase in the amount of processing capability needed,” says Denis Smetana, senior product manager, DSP products, at Curtiss-Wright in Ashburn, Virginia. “The trend has always been to continue to exploit and leverage new developments within computer architectures and capabilities.”

FPGAs display numerous possibilities

The capabilities military customers are asking of design engineers are emphatically not cosmetic desires to have the newest, shiniest equipment. Industry professionals find that as threats become more frequent and sophisticated, the pace of technology development must advance to match.

While the military is known to be slower to upgrade to the latest and greatest gear for a multitude of logistical reasons, falling behind the technology that international competitors are fielding is no longer an option.

“Everything is becoming agile, and the ability to do that quickly is our new mission,” says John Bratton, product and solutions marketing director at Mercury Systems. “What used to take weeks, months, years to address is now broken down into hours. Devices like field-programmable gate arrays (FPGA) enable that kind of quick adaptation of technology, as does interoperability and scalability driven by standards like the Sensor Open Systems Architecture (SOSA).”

FPGAs – made up of hundreds of thousands of cells that can be programmed to do nearly anything – are by design well-suited to parallel processing: they excel at applying repetitive functions across a wide array of input data streams arriving simultaneously. In contrast, general-purpose graphics processing units (GPGPUs) tend to be better at working through a chain of operations and doing more signal-type processing.
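To make the distinction concrete, here is a minimal sketch (in software, purely for illustration) of the kind of repetitive, data-parallel work the article attributes to FPGAs: applying one and the same FIR filter to many independent input streams at once. The function name and parameters are invented for this example.

```python
import numpy as np

def filter_streams(streams: np.ndarray, taps: np.ndarray) -> np.ndarray:
    """Apply one FIR filter to every row (stream) of a 2-D sample array.

    On an FPGA, each stream would get its own hardware filter instance,
    so all rows are processed in the same clock cycles; here we just
    loop to show the identical per-stream operation.
    """
    return np.array([np.convolve(s, taps, mode="same") for s in streams])

# Eight parallel streams of 1,000 samples, smoothed with a 4-tap moving average.
streams = np.random.default_rng(0).normal(size=(8, 1000))
taps = np.ones(4) / 4.0
out = filter_streams(streams, taps)
print(out.shape)  # (8, 1000)
```

The serial, decision-heavy work that follows the filters (thresholding, tracking, tasking) is the part the article assigns to general-purpose processors.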

“In these ISR applications, you tend to have a mix of FPGAs or GPGPUs doing the parallel processing and general-purpose x86-based processors doing the serial processing,” Smetana says. “And what we’re seeing today is really a merging of technology where a mixture of FPGA, DSP, general-purpose processing such as ARM cores, and GPGPU cores may all exist within a single device.”

This merger of technologies means that instead of military customers asking for either GPU- or FPGA-specific solutions, the question now centers on how the two can work better together. This way, rather than manufacturers having to create a processor from scratch to meet the needs of the military customer, FPGAs instead become essentially programmable building blocks.

“What we’re talking about is an engineered building block,” McQuaid says. “Mercury ruggedly packages GPGPU, FPGA, and data center processors with their associated memory into building blocks that they can quickly configure into open system architecture solutions in various standard form factors.”

These advances make programming FPGAs more accessible to software engineers, which will prove to be hugely beneficial for the implementation of artificial intelligence (AI) in ISR signal processing.

The introduction of artificial intelligence

“The computational horsepower required for this type of detection is much greater than traditional match-filtering techniques in FPGAs, but by combining the strengths of FPGAs and GPUs, these systems are moving from the theoretical realm to the realm of practical reality,” says Phillip Henson, senior product manager, DSP, at Abaco Systems in Huntsville, Alabama. “The addition of dedicated logic to support deep neural networks in FPGAs is advancing their use in this space.”

With the processing power offered by GPUs and FPGAs working together, engineers now see programming these processors with built-in machine learning (ML) capabilities as more of an achievable feat. Xilinx in particular is one of the prominent players when it comes to integrating AI engines into its processors.

“Maybe there are signals that you might want to identify as they come in,” says Noah Donaldson, chief technical officer at Annapolis Micro Systems in Annapolis, Maryland. “Previously, a human might have had to sit there and watch the data go by and observe a signal. Now, it’ll be easier and easier to program a machine to do that on its own.”

Artificial intelligence becomes a pivotal capability when it comes to comprehending signal data that can be far too cryptic for a human to break down and understand. Complex combinations of frequencies, amplitudes, and signatures are difficult for a person to see, but a processor can learn to pick up those readings and classify them accordingly.
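As a toy sketch of that classification idea (not any vendor's actual algorithm), suppose each detected emitter is reduced to a pair of features, center frequency and amplitude, and the system learns one centroid per emitter class from labeled examples. The class names, frequencies, and amplitudes below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic training data: (center frequency MHz, amplitude dB) per detection.
radar = rng.normal(loc=[9000.0, -40.0], scale=[50.0, 3.0], size=(100, 2))
comms = rng.normal(loc=[400.0, -70.0], scale=[10.0, 3.0], size=(100, 2))

# "Training" here is just storing one centroid (mean feature vector) per class.
centroids = {"radar": radar.mean(axis=0), "comms": comms.mean(axis=0)}

def classify(sample: np.ndarray) -> str:
    """Nearest-centroid decision over the learned class means."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))

print(classify(np.array([8950.0, -42.0])))  # radar
print(classify(np.array([395.0, -68.0])))   # comms
```

A fielded system would use far richer features and a deep network rather than centroids, but the shape of the problem, learning decision boundaries a human cannot eyeball, is the same.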

“With the use of artificial intelligence,” McQuaid says, “we can identify patterns, align them with past experience, and potentially come up with a countermeasure much more quickly than you could if you were looking at something for the first time.”

Manufacturers are also building autonomous platforms where all of the sensors plug into a central processing platform. This phenomenon, called sensor fusion, reduces the need for proprietary sensors that usually each have their own sensor-processing chains.
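The sensor-fusion idea above can be sketched in a few lines: heterogeneous sensors publish readings into one central fusion function instead of each owning its own proprietary processing chain. The sensor names, fields, and weights here are invented for illustration, and the simple weighted average ignores real-world issues such as bearing wraparound at 360°.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    bearing_deg: float   # direction to the detected object, per this sensor
    weight: float        # confidence the sensor assigns to its own reading

def fuse_bearing(readings: list[Reading]) -> float:
    """Confidence-weighted average of bearings from all plugged-in sensors."""
    total = sum(r.weight for r in readings)
    return sum(r.bearing_deg * r.weight for r in readings) / total

# Three different sensor types feeding the same central platform.
feeds = [Reading("radar", 42.0, 0.6),
         Reading("eo_camera", 44.0, 0.3),
         Reading("rf_df", 40.0, 0.1)]
print(round(fuse_bearing(feeds), 1))  # 42.4
```

Adding a new sensor type means adding one more `Reading` producer; the central fusion logic does not change, which is the efficiency Bratton describes.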

“It’s much more efficient from a compute and hardware point of view,” Bratton says. “And it’s much more affordable and enables smaller platforms.”

SWaP plays an important role

Constant calls to shrink the size of platforms and processors are a major design factor, as design engineers take the appropriate measures to ensure the parts operate at powerful levels, take up less space, and remain cool even at top operating speeds.

“There’s a desire to take the processing that used to be only available in a fairly large form-factor module, or set of modules, and squeeze it down to something that’s incredibly tiny; so trying to serve a whole range of small to large, there’s a lot of area to cover,” Smetana says. “So, it’s helpful to make the types of processing in between those ranges scalable, but it is challenging to use the same type of technology in that wide range of form factors.”

One major challenge that comes from shrinking the size of a high-powered processor: managing temperature. With larger form factors, keeping the machine cool is much easier to achieve than when military customers ask for a fast, powerful, and undetectable processor that also happens to be incredibly small. The constant requests for better thermal management are driving innovations in processor design and management.

“Cooling really does bring a lot to the table in terms of SWaP performance, and also delivers greater reliability and deterministic processing as throttling back is avoided,” Bratton says. “We’ve got some great cooling solutions, some of which are air-cooled using a management system instead of the traditional, unmanaged CFM approach, and liquid cooling, which oftentimes can be the fuel on the platform itself.”

Industry efforts to build universal SWaP solutions for processors will not only improve overall reliability but also create more opportunity for adaptability across platforms. The SOSA effort has played a significant role in establishing these industrywide commonalities.

The influence of SOSA on signal processing

The U.S. government’s SOSA initiative creates modular open systems architecture specifications to enable re-use of key sensor components across multiple platforms and services.

“Just like how you want to easily be able to upgrade your computer and maybe get a faster processor, they want to be able to do the same thing in their system,” Donaldson says. “That’s where things like the SOSA initiative help to achieve that easier upgrade path because it sets standards for how you design processing boards.”

What SOSA aims to do is set those standards so manufacturers and military customers alike can move on to the next-generation technology, unplug the old part, plug in the new part, and operate it more efficiently.
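A loose software analogy for that SOSA-style upgrade path: any board that satisfies a common interface can be swapped in without touching the host platform's code. The interface and card classes below are invented for illustration, not part of the SOSA specifications themselves.

```python
from typing import Protocol

class ProcessorCard(Protocol):
    """Common interface every plug-in card must satisfy."""
    def process(self, samples: list[float]) -> list[float]: ...

class GenOneCard:
    def process(self, samples: list[float]) -> list[float]:
        return [s * 2.0 for s in samples]          # the old part

class GenTwoCard:
    def process(self, samples: list[float]) -> list[float]:
        return [s * 2.0 + 1.0 for s in samples]    # unplug old, plug in new

def run_mission(card: ProcessorCard, samples: list[float]) -> list[float]:
    # Host platform code never changes when the card is upgraded.
    return card.process(samples)

print(run_mission(GenOneCard(), [1.0, 2.0]))  # [2.0, 4.0]
print(run_mission(GenTwoCard(), [1.0, 2.0]))  # [3.0, 5.0]
```

Standardizing the interface, rather than the implementation behind it, is what lets the next-generation part drop in with minimal integration work.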

“This approach reduces the level of technical risk,” McQuaid says. “It also reduces schedule risk because we’re not building things from scratch; we’re taking an assembly of building blocks and putting them together to meet a particular need, which reduces cost because we don’t have to invest in a new solution every single time a new requirement comes along.”

Plug-and-play capabilities in ISR applications specifically foster an environment for expeditious fielding of technology. Signals intelligence simply doesn’t have the luxury of waiting the 10 years known to be typical of military system deployment.

Where DoD funding is headed

As military threats continually evolve, it has become substantially more apparent to the Department of Defense (DoD) that ensuring warfighters are equipped with proper responses to such threats is critical.

“DoD funding is definitely rising in response to many real-world threats including hypersonic missiles, unstable governments with nuclear weapons, persistent attacks on intellectual property, and terrorism,” says Rodger Hosking, vice president and cofounder of Pentek in Upper Saddle River, New Jersey. “Gaining better information through advanced ISR is essential in countering these threats.”

The goal for ISR signal processors is to see longer, farther, and more precisely than the opponent; getting there without financial help from the DoD would prove to be a challenge.

“It’s [government funding] trending upward in the area of electronic surveillance, for sure,” Abaco’s Henson says. “As new techniques enable greater information to be obtained, and as our adversaries become more advanced, our capabilities must evolve that much more rapidly.”

Read on militaryembedded.com
