Endurance and Altitude for Propeller-driven Aircraft

Lower altitude = higher endurance?

At first glance this didn’t make sense to me. Air Force pilots will be the first to tell you that you don’t hug the earth en route to maximize your time on station; it is harder slogging through thicker air. At higher altitudes you have lower air pressure and colder air temperatures: low pressure is bad for thrust, but low temperatures are more favorable. How does this all shake out? And what are the differences between jet- and prop-driven aircraft?

My initial hunch was that for jets, higher altitude requires more speed for level flight, so for every gram of fuel burned you would cover more range in more rarefied air, but endurance is really about turbine efficiency. Higher altitudes decrease the mass flow through the engine, but engine cycle performance is mostly correlated with the ratio of the maximum temperature achievable in the turbine, $T_{max}$, to the ambient temperature. $T_{max}$ is fixed by engine material properties, so the main way to increase efficiency with altitude is to decrease your ambient temperature. On a standard day, the isotherm begins around 36,089 feet, where the temperature remains close to -70 °F for roughly the next 30,000 feet. Above about 70 kft, the outside temperature actually begins to increase. Pressure decreases at a rate that is not subject to the same complex interplay between the sun’s incoming radiation and the earth’s outbound radiation, which means any altitude above roughly 36 kft should really not give you a performance advantage.

However, modern high-endurance unmanned platforms are propeller-driven aircraft. While I haven’t done the math to show how much more efficient these are compared to jets, I want to explore in particular how the efficiency of propeller-driven aircraft is affected by altitude.

All aircraft share common interactions with the air, so I had to start with the basics of the airfoil. The lift coefficient is traditionally defined as the ratio of the lift force to the dynamic pressure acting over a reference area. Dynamic pressure (the difference between the stagnation and static pressures) is simply the pressure due to the motion of the air particles; applied over a given area, it sets the scale of the force that lift must supply for upward motion. If $L$ is the lift force and $q_{\infty}$ is the dynamic pressure applied across a planform area $A$ for an air density of $\rho$, then rearranging the lift equation gives

$$ C_L = \frac{L}{q_{\infty}\,A}=\frac{2\,L}{\rho\,v^2 A}. $$

Solving for velocity gives us our first insight into how altitude will impact the performance of any airfoil:

$$ v = \sqrt{\frac{2\,L}{\rho\,C_L\,A}}, $$

since density is higher at lower altitudes, lower altitudes require lower velocity to generate the same lift force. But how does lower velocity equate to more endurance?

Using climatic data from MIL-STD-210C, under the conservative assumption of a high density that occurs 20% of the time, we have a basic $1/x$ relationship, with density decreasing dramatically with altitude.

From this alone, we can plot the velocity needed to keep $C_L$ constant.
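To make this concrete, here is a small Python sketch (my own illustration, not from any flight manual) that pairs a standard-atmosphere density model with the rearranged lift equation. The aircraft numbers (10 kN weight, 15 m² wing, $C_L = 0.8$) are made up for illustration:

```python
import math

def isa_density(alt_m):
    """Approximate ISA air density (kg/m^3) in the troposphere (< 11 km)."""
    T0, p0, L, R, g = 288.15, 101325.0, 0.0065, 287.053, 9.80665
    T = T0 - L * alt_m                      # linear temperature lapse
    p = p0 * (T / T0) ** (g / (R * L))      # hydrostatic pressure
    return p / (R * T)                      # ideal gas law

def level_flight_speed(weight_n, wing_area_m2, cl, alt_m):
    """Speed (m/s) to hold a fixed C_L: v = sqrt(2W / (rho * A * C_L))."""
    rho = isa_density(alt_m)
    return math.sqrt(2 * weight_n / (rho * wing_area_m2 * cl))

# Hypothetical light UAV: 10 kN weight, 15 m^2 wing, C_L = 0.8
for alt in (0, 3000, 6000, 9000):
    v = level_flight_speed(10e3, 15.0, 0.8, alt)
    print(f"{alt:5d} m: rho = {isa_density(alt):.3f} kg/m^3, v = {v:.1f} m/s")
```

The required speed climbs steadily with altitude, which is the $1/\sqrt{\rho}$ relationship above.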

To understand the impact this has on endurance, we have to look at brake-specific fuel consumption (BSFC), a measure of fuel efficiency within a shaft reciprocating engine. It is simply the rate of fuel consumption divided by the power produced and can also be considered a power-specific fuel consumption. If we consume fuel at a rate $r$ (grams per second) for a given power $P$,

$$ \text{BSFC} = \frac{r}{P} = \frac{r}{\tau \, \omega}, $$

with $\tau$ as engine torque (N·m) and $\omega$ as engine speed (rad/s).
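As a quick sanity check on the definition, a few lines of Python with made-up engine numbers (400 N·m at 2,500 rpm, burning 8 g/s of fuel):

```python
import math

def bsfc(fuel_rate_kg_s, torque_nm, omega_rad_s):
    """BSFC = fuel mass flow / shaft power, in kg/J (often quoted as g/kWh)."""
    power_w = torque_nm * omega_rad_s   # P = tau * omega
    return fuel_rate_kg_s / power_w

# Hypothetical piston engine: 400 N*m at 2,500 rpm burning 8 g/s
omega = 2500 * 2 * math.pi / 60          # rpm -> rad/s
val = bsfc(0.008, 400.0, omega)          # kg/J
print(f"BSFC = {val * 3.6e9:.0f} g/kWh") # kg/J -> g/kWh
```

The result lands in the ~275 g/kWh range, typical of aviation piston engines.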

BSFC varies through complex interactions with engine speed and pressure:

To understand how this affects endurance, let’s just consider how much fuel we have divided by the rate at which we use it. Because fuel consumption for piston engines is proportional to power output, we use the average-value method to predict the endurance of a propeller-driven aircraft: endurance is simply the fuel weight you have divided by the average rate at which that fuel is spent, all the way to zero fuel. If $W_f$ is the weight of fuel available to be burned:

$$ E = \frac{\Delta W_f}{\dot W_f} = \frac{\Delta W_f}{(\text{BSFC}/\eta_{prop})\,D_{avg}\,V_{\infty}}, $$

where $\eta_{prop}$ is the propeller efficiency factor and $V_{\infty}$ is the free-stream airspeed. The speed for maximum endurance corresponds to flight at the minimum power required, and at this power we maximize endurance at the corresponding velocity. For increased accuracy, let’s treat the fuel burn as continuous,
$$ E = \int_{W_2}^{W_1} \frac{\eta_{prop}}{\text{BSFC}} \frac{dW}{D\,V_{\infty}}. $$

If $C_L$ is constant and $W=L$, then we can substitute $V_{\infty} = \sqrt{2W/(\rho\,A\,C_L)}$,

$$ E = \frac{\eta_{prop}\,C_L^{3/2}}{\text{BSFC}\,C_D} \sqrt{\frac{\rho\,A}{2}} \int_{W_2}^{W_1} \frac{dW}{W^{3/2}} = \frac{\eta_{prop}\,C_L^{3/2}}{\text{BSFC}\,C_D} \sqrt{2\,\rho\,A} \left( W_2^{-1/2} - W_1^{-1/2} \right)$$

This tells us that if we want maximum endurance, we want high propeller efficiency, low BSFC, high density (both low altitude and low temperature), a large weight of fuel available, and a maximum value of the ratio $C_L^{3/2}/C_D$. Naturally, higher density directly assists, but the ratio $C_L^{3/2}/C_D$ is maximized when we minimize the power required,
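Plugging hypothetical numbers into the integrated endurance equation gives a feel for the magnitudes involved; every parameter below is an illustrative guess, not data for any real aircraft:

```python
import math

def endurance_s(eta_prop, cl, cd, bsfc_n_per_ws, rho, area_m2, w1_n, w2_n):
    """Endurance (s): E = (eta * C_L^1.5 / (BSFC * C_D)) * sqrt(2*rho*A)
    * (W2^-0.5 - W1^-0.5), with BSFC as weight flow per watt-second."""
    return (eta_prop * cl**1.5 / (bsfc_n_per_ws * cd)
            * math.sqrt(2 * rho * area_m2)
            * (w2_n**-0.5 - w1_n**-0.5))

# Hypothetical UAV: eta = 0.8, C_L = 0.8, C_D = 0.05, 15 m^2 wing,
# 10 kN start weight burning down to 8 kN, BSFC of 275 g/kWh.
bsfc = 275 / 1000 / 3.6e6 * 9.80665   # g/kWh -> kg/J -> N/(W*s)
E = endurance_s(0.8, 0.8, 0.05, bsfc, 1.225, 15.0, 10e3, 8e3)
print(f"Endurance ~ {E / 3600:.1f} hours")
```

With these made-up numbers the result is on the order of a day aloft, and rerunning with a lower density (higher altitude) shrinks it, which is the whole point of the derivation.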

$$ P_R = V_{\infty} \, D = V_{\infty} \, \frac{W}{C_L / C_D} = \sqrt{\frac{2\,W^3}{\rho\,A}}\,\frac{C_D}{C_L^{3/2}} = \text{constant} \times \frac{C_D}{C_L^{3/2}} $$

So we want to minimize $P_R$. For an assumed drag polar, $C_D = C_{D_0} + k\,C_L^2$, the condition for minimizing $C_D/C_L^{3/2}$ is found by expressing the ratio in terms of the drag polar, taking the derivative with respect to $C_L$, and setting it equal to zero:

$$ 3 C_{D_0}=k C_L^2 $$

This has the interesting result that at the velocity for minimum power required, the induced drag is three times the parasitic drag.
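A quick numerical check of this condition, using an assumed drag polar $C_D = C_{D_0} + k C_L^2$ with made-up coefficients:

```python
import math

# Assumed drag polar coefficients (illustrative only)
cd0, k = 0.02, 0.05

def cd_over_cl32(cl):
    """The quantity minimized at minimum power required."""
    return (cd0 + k * cl**2) / cl**1.5

# Crude 1-D grid search for the minimizing C_L
cls = [0.01 * i for i in range(1, 500)]
cl_best = min(cls, key=cd_over_cl32)

# Analytic optimum from 3*C_D0 = k*C_L^2
cl_analytic = math.sqrt(3 * cd0 / k)
print(cl_best, cl_analytic)
```

The grid search lands on the analytic optimum, and at that $C_L$ the induced drag $k C_L^2$ equals exactly $3 C_{D_0}$.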

In all, platform endurance is a function of a number of design parameters and the air density: in general, the higher the air density, the higher the endurance. Please comment to improve my thoughts on this.

Weaponizing the Weather

“Intervention in atmospheric and climatic matters . . . will unfold on a scale difficult to imagine at present. . . . this will merge each nation’s affairs with those of every other, more thoroughly than the threat of a nuclear or any other war would have done.” — J. von Neumann

Disclaimer: This is just me exploring a topic that I’m generally clueless on, explicitly because I’m clueless on it. My views and the research discussed here have nothing to do with my work for the DoD.

Why do we care?

Attempting to control the weather is older than science itself. While it is common today to perform cloud seeding to increase rain or snow, weather modification has the potential to prevent damaging weather from occurring, or to provoke damaging weather as a tactic of military or economic warfare. This scares all of us, including the UN, which banned weather modification for the purposes of warfare in response to US actions in Vietnam to induce rain and extend the East Asian monsoon season (see Operation Popeye). Unfortunately, this hasn’t stopped Russia and China from pursuing active weather modification programs, with China generally regarded as having the largest and most active. While Russia is famous for sophisticated cloud-seeding in 1986 to prevent radioactive rain from the Chernobyl reactor accident from reaching Moscow, see China Leads the Weather Control Race and China plans to halt rain for Olympics to understand the extent of China’s efforts in this area.

The Chinese have been tinkering with the weather since the late 1950s, trying to bring rain to the desert terrain of the northern provinces. Their bureau of weather modification was established in the 1980s and is now believed to be the largest in the world, with a reserve army of 37,000 people. That might sound like a lot, until we consider the scale of an average storm. The numbers that describe weather are big. At any instant there are approximately 2,000 thunderstorms in progress, and every day some 45,000 thunderstorms occur, containing some combination of heavy rain, hail, microbursts, wind shear, and lightning. The energy involved is staggering: a tropical storm can have an energy equal to 10,000 one-megaton hydrogen bombs. A single cloud contains about a million pounds of water, so a mid-size storm would contain about 3 billion pounds of it. Anyone who ever figures out how to control all this mass and energy would make an excellent Bond villain.

The US government has conducted research in weather modification as well. In 1970, then ARPA Director Stephen J. Lukasik told the Senate Appropriations Committee: “Since it now appears highly probable that major world powers have the ability to create modifications of climate that might be seriously detrimental to the security of this country, Nile Blue [a computer simulation] was established in FY 70 to achieve a US capability to (1) evaluate all consequences of a variety of possible actions … (2) detect trends in the global circulation which foretell changes … and (3) determine, if possible, means to counter potentially deleterious climatic changes … What this means is learning how much you have to tickle the atmosphere to perturb the earth’s climate.” Sounds like a reasonable program for the time.

Military applications are easy to think up. If you could create a localized cloud layer, you could degrade the performance of ground-based and airborne IRSTs, particularly in the long-wave. (Cloud droplet mean diameter is typically 10 to 15 microns.) You could send hurricanes toward your adversary or increase the impact of an all-weather advantage. (Sweet.) You could also pursue subtler effects, such as tuning the atmosphere toward your own communications technology or degrading the environment to a state less optimal for an adversary’s communications or sensors. Another key advantage would be to make the environment unpredictable. Future ground-based sensing and fusing architectures, such as multi-static and passive radars, rely on a correctly characterized environment that could be disrupted by both intense and unpredictable weather.

Aside from military uses, climate change (both perception and fact) may drive some nations to seek engineered solutions. Commercial interests would welcome the chance to make money cleaning up the mess they made money making. And how are we going to sort out and regulate that without options and deep understanding? Many of these proposals could have dual civilian and military purposes as they originate in Cold War technologies. As the science advances, will we be able to prevent their renewed use as weapons? Could the future hold climatological conflicts, just as we’ve seen cyber warfare used to presage invasion as recently seen between Ukraine and Russia? If so, climate influence would be a way for a large state to exert an influence on smaller states.

Considering all of this, it would be prudent to have a national security policy that accounts for weather modification and manipulation. Solar radiation management, called albedo modification, is considered to be a potential option for addressing climate change and one that may get increased attention. There are many research opportunities that would allow the scientific community to learn more about the risks and benefits of albedo modification, knowledge which could better inform societal decisions without imposing the risks associated with large-scale deployment. According to Carbon Dioxide Removal and Reliable Sequestration (2015) by the National Academy of Sciences, there are several hypothetical, but plausible, scenarios under which this information would be useful. They claim (quoting them verbatim):

  1. If, despite mitigation and adaptation, the impacts of climate change still become intolerable (e.g., massive crop failures throughout the tropics), society would face very tough choices regarding whether and how to deploy albedo modification until such time as mitigation, carbon dioxide removal, and adaptation actions could significantly reduce the impacts of climate change.
  2. The international community might consider a gradual phase-in of albedo modification to a level expected to create a detectable modification of Earth’s climate, as a large-scale field trial aimed at gaining experience with albedo modification in case it needs to be scaled up in response to a climate emergency. This might be considered as part of a portfolio of actions to reduce the risks of climate change.
  3. If an unsanctioned act of albedo modification were to occur, scientific research would be needed to understand how best to detect and quantify the act and its consequences and impacts.

What has been done in the past?

Weather modification was limited to magic and prayers until the 18th century when hail cannons were fired into the air to break up storms. There is still an industrial base today if you would like to have your own hail cannon. Just don’t move in next door if you plan on practicing.

(Not so useful) Hail Cannons

Despite their use on a large scale, there is no evidence in favor of the effectiveness of these devices. A 2006 review by Jon Wieringa and Iwan Holleman in the journal Meteorologische Zeitschrift summarized a variety of negative and inconclusive scientific measurements, concluding that “the use of cannons or explosive rockets is waste of money and effort”. In the 1950s to 1960s, Wilhelm Reich performed cloudbusting experiments, the results of which are controversial and not widely accepted by mainstream science.

However, during the Cold War the US government committed to an ambitious experimental program named Project Stormfury for nearly 20 years (1962 to 1983). The DoD and NOAA attempted to weaken tropical cyclones by flying aircraft into them and seeding them with silver iodide. The proposed modification technique involved artificial stimulation of convection outside the eye wall through seeding with silver iodide. The artificially invigorated convection, it was argued, would compete with the convection in the original eye wall, lead to reformation of the eye wall at larger radius, and thus produce a decrease in the maximum wind. Since a hurricane’s destructive potential increases rapidly as its maximum wind becomes stronger, a reduction as small as 10% would have been worthwhile. Modification was attempted in four hurricanes on eight different days. On four of these days, the winds decreased by between 10 and 30%. The lack of response on the other days was interpreted to be the result of faulty execution of the experiment or poorly selected subjects.

These promising results have, however, come into question because recent observations of unmodified hurricanes indicate: (1) that cloud seeding has little prospect of success because hurricanes contain too much natural ice and too little supercooled water, and (2) that the positive results inferred from the seeding experiments in the 1960s probably stemmed from the inability to discriminate between the expected effect of human intervention and the natural behavior of hurricanes. The legacy of this program is the large global infrastructure today that routinely flies to inject silver iodide to cause localized rain, with over 40 countries actively seeding clouds to control rainfall. Unfortunately, we are still pretty much helpless in the face of a large hurricane.

That doesn’t mean the Chinese aren’t trying. In 2008, China assigned 30 airplanes, 4,000 rocket launchers, and 7,000 anti-aircraft guns in an attempt to stop rain from disrupting the 2008 Olympics by shooting various chemicals into the air at any threatening clouds in the hopes of shrinking rain drops before they reached the stadium. Due to the difficulty of conducting controlled experiments at this scale, there is no way to know if this was effective. (Yes, this is the country that routinely bulldozes entire mountain ranges to make economic regions.)

But the Chinese aren’t the only ones. In January, 2011, several newspapers and magazines, including the UK’s Sunday Times and Arabian Business, reported that scientists backed by Abu Dhabi had created over 50 artificial rainstorms between July and August 2010 near Al Ain. The artificial rainstorms were said to have sometimes caused hail, gales and thunderstorms, baffling local residents. The scientists reportedly used ionizers to create the rainstorms, and although the results are disputed, the large number of times it is recorded to have rained right after the ionizers were switched on during a usually dry season is encouraging to those who support the experiment.

While we would have to understand the technology very well first and have a good risk mitigation strategy, I think there are several promising technical areas that merit further research.

What are the technical approaches?

So while past experiments are hard to learn much from and far from providing the buttons to control the weather, there are some promising technologies I’m going to be watching. There are five different technical approaches I was able to find:

  1. Altering the available solar energy by introducing materials to absorb or reflect sunshine
  2. Adding heat to the atmosphere by artificial means from the surface
  3. Altering air motion by artificial means
  4. Influencing the humidity by increasing or retarding evaporation
  5. Changing the processes by which clouds form and causing precipitation by using chemicals or inserting additional water into the clouds

In these five areas, I see several technical applications that are both interesting and have some degree of potential utility.

Modeling

Below is the 23-year accuracy of the U.S. GFS, the European ECMWF, the U.K. Government’s UKMET, and a model called CDAS, which has never been modified, to serve as a “constant.” As you would expect, model accuracy is gradually increasing (1.0 is 100% accurate). Weather models are limited by computation and the scale of input data: for a fixed amount of computing power, the smaller the grid (and the more accurate the prediction), the shorter the time horizon for predictions. As more sensors are added and fused together, accuracy will keep improving.

Weather prediction requires satellite and radar imagery on a very small scale. Current effective observation spacing is around 5 km. Radar data are only available out to a fairly short distance from the coast, and satellite wind measurements can only resolve detail on about a 25 km scale. Over land, radar data can be used to help predict small-scale and short-lived detail.

Weather Model Accuracy over Time

Modeling is important, because understanding is necessary for control. With increased accuracy, we can understand weather’s leverage points and feedback loops. This knowledge is important, because increased understanding would enable applying the least amount of energy where it matters most. Interacting with weather on a macro scale is both cost prohibitive and extremely complex.

Ionospheric Augmentation

Over-the-horizon radars (commonly called OTHR) can see targets hundreds of miles away because they aren’t limited by line of sight like conventional microwave radars. They accomplish this by bouncing signals off the ionosphere, but this requires a sufficiently dense ionosphere that isn’t always there. Since the ionosphere is ionized by solar radiation, it is densest in summer, when the hemisphere is tilted toward the sun. To compensate for this, artificial ionospheric mirrors could bounce HF signals more consistently and precisely over broader frequencies. Tests have shown that these mirrors could theoretically reflect radio waves with frequencies up to 2 GHz, which is nearly two orders of magnitude higher than waves reflected by the natural ionosphere. This could have significant military applications such as low-frequency (LF) communications, HF ducted communications, and increased OTHR performance.

This concept has been described in detail by Paul A. Kossey, et al. in a paper entitled “Artificial Ionospheric Mirrors.” The authors describe how one could precisely control the location and height of the region of artificially produced ionization using crossed microwave beams, which produce atmospheric breakdown. The implications of such control are enormous: one would no longer be subject to the vagaries of the natural ionosphere but would instead have direct control of the propagation environment. Ideally, these artificial mirrors could be rapidly created and then would be maintained only for a brief operational period.

Local Introduction of Clouds

There are several methods for seeding clouds. The best-known dissipation technique for cold fog is to seed it from the air with agents that promote the growth of ice crystals. These include dropping pyrotechnics on top of existing clouds, penetrating clouds with pyrotechnics and liquid generators, shooting rockets into clouds, and working from ground-based generators. Silver iodide is frequently used to cause precipitation, and effects usually are seen in about thirty minutes. Limited success has been noted in fog dispersal and improving local visibility through the introduction of hygroscopic substances.

However, all of these techniques are a very inexact science, and thirty minutes remains far from the timescale needed for clouds on demand. From my brief look at it, we are just poking around in cloud formation. For the local introduction of clouds to be useful in military applications, there has to be a suite of techniques robust to changing weather. More research in this area might be able to effect chain reactions that cause massive cloud formations, and real research could help the field emerge from pseudo-science, of which there is plenty. This Atlantic article titled Dr. Wilhelm Reich’s Orgasm-Powered Cloudbuster is pretty amusing and pretty indicative of the genre.

A cloud gun that taps into an “omnipresent libidinal life force responsible for gravity, weather patterns, emotions, and health”

Fog Removal

Ok, so no one can make clouds appear on demand in a wide range of environments, but is technology any better at removing fog? The best-known dissipation technique is heating, because a small temperature increase is usually sufficient to evaporate fog. Since heating on a very wide scale usually isn’t practical, the next most effective technique is hygroscopic seeding, which uses agents that absorb water vapor. This technique is most effective when accomplished from the air but can also be accomplished from the ground. Optimal results require advance information on fog depth, liquid water content, and wind.

In the 20th century, several methods were proposed to dissipate fog. One of them is to burn fuel along the runway, heating the fog layer and evaporating droplets. It was used in Great Britain during World War II to allow British bombers returning from Germany to land safely in fog conditions. Helicopters can dissipate fog by flying slowly across the top surface, mixing warm, dry air into the fog. The downwash action of the rotors forces air from above into the fog, where it mixes, producing lower humidity and causing the fog droplets to evaporate. Tests were carried out in Florida and Virginia, and in both places cleared areas were produced in the helicopter wakes. Seeding with polyelectrolytes causes electric charges to develop on drops and has been shown to cause drops to coalesce and fall out. Other techniques that have been tried include the use of high-frequency (ultrasonic) vibrations, heating with lasers, and seeding with carbon black to alter the radiative properties.

However, experiments have confirmed that large-scale fog removal would require exceeding the power density exposure limit of $100 \frac{\text{watt}}{m^2}$ and would be very expensive. That doesn’t mean that capability on a smaller scale isn’t possible. Field experiments with lasers have demonstrated the capability to dissipate warm fog at an airfield with zero visibility: generating $1 \frac{\text{watt}}{cm^2}$, which is approximately the US power density exposure limit, raised visibility to one quarter of a mile in 20 seconds. Most efforts have focused on increasing the runway visibility range at airports, since airline companies lose millions of dollars every year to fog on the runway. This thesis examines the issue in depth.

Emerging Enabling Technologies

In looking at this topic, I was able to find several interesting technologies that may develop and make big contributions to weather research.

Carbon Dust

Just as a black tar roof easily absorbs solar energy and subsequently radiates heat during a sunny day, carbon black also readily absorbs solar energy. When dispersed in microscopic form in the air over a large body of water, the carbon becomes hot and heats the surrounding air, thereby increasing the amount of evaporation from the water below. As the surrounding air heats up, parcels of air will rise and the water vapor contained in the rising air parcel will eventually condense to form clouds. Over time the cloud droplets increase in size as more and more water vapor condenses, and eventually they become too large and heavy to stay suspended and will fall as rain. This technology has the potential to trigger localized flooding and bog down troops and their equipment.

Nanotech

Want to think outside the box? Smart materials based on nanotechnology are currently being developed with processing capability. They could adjust their size to optimal dimensions for a given fog seeding situation and even make continual adjustments. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. If successful, they will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and could also change their temperature and polarity to improve their seeding effects.

If we combine this with high-fidelity models, things can get very interesting. If we can model and understand the leverage points of a weather system, nano-clouds may be able to have a dramatic effect. Nanotechnology also offers possibilities for creating simulated weather: a cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system, could mimic the signatures of specific weather patterns if tailored to the parameters of weather models.

High power lasers

The development of directed radiant energy technologies, such as microwaves and lasers, could provide new possibilities; everyone should hate firing rockets and chemicals into the atmosphere. The advent of ultrashort laser pulses and the discovery of self-guided ionized filaments (see Braun et al., 1995) might provide the opportunity. Jean-Pierre Wolf has used ultrashort laser pulses to create lightning and cue cloud formation. Prof. Wolf says, “We did it on a laboratory scale, we can already create clouds, but not on a macroscopic scale, so you don’t see a big cloud coming out because the laser is not powerful enough and because of a lot of technical parameters that we can’t yet control,” from this CNN article.

What now?

So we have all the elements of a scientific discipline and could use a national strategy in this area that includes ethics, policy, technology, and military employment doctrine. The military and civilian communities already invest heavily in sensors and modeling of weather effects. These should be coupled with feasible excitation mechanisms to create a tight innovation loop. Again, this area is sensitive and politically charged, but there is a clear need to pull together sensors, processing capability, and excitation mechanisms to ensure we have the right responses and capabilities. With such a dubious and inconclusive past, is there a potential future for weather modification? I think we have a responsibility to pursue knowledge even in areas where the ethical boundaries are not well established. Ignorance is never a good strategy. Just because we might open Pandora’s box doesn’t mean that a less morally responsible nation or group won’t get there first. We can always abstain from learning a new technology, but if we are caught by surprise, we won’t have the knowledge to develop a good counter-strategy.

References

  1. http://csat.au.af.mil/2025/volume3/vol3ch15.pdf
  2. http://www.wired.com/2009/12/military-science-hack-stormy-skies-to-lord-over-lightning/
  3. Prospects for Weather Modification


Playing with Matched Filters

During my time on the red team, we continually discussed the role of matched filters in everything from GPS to fire control radars. While I’m having a blast at DARPA, where I work in cyber, I wanted to review an old topic and put MATLAB’s Phased Array System Toolbox to the test. (Yes, radar friends, this is basic stuff. I’m mostly writing this to refresh my memory and remember how to code. Maybe a fellow manager might find this review helpful, but if you are in this field, there won’t be anything interesting or new below.)

Why use matched filters?

Few things are more fundamental to radar performance than the fact that probability of detection increases with increasing signal-to-noise ratio (SNR). For a deterministic signal in white Gaussian noise (as good an assumption as any for background noise, though the noise need not be Gaussian for a matched filter to work), the SNR is maximized at the receiver by using a filter matched to the signal.

One thing that always confused me about matched filters is that a matched filter isn’t a particular type of filter, but more of a framework: you choose the filter that minimizes the effect of the noise, which results in the highest signal-to-noise ratio. One way I’ve heard this described is that the matched filter is a time-reversed and conjugated version of the signal.
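A tiny NumPy sketch of that description (a synthetic pulse and noise, nothing radar-specific): build the filter as the time-reversed conjugate of the known signal and convolve, and the output peaks where the signal ends.

```python
import numpy as np

rng = np.random.default_rng(0)

# A short complex pulse buried in noise
s = np.exp(1j * 2 * np.pi * 0.1 * np.arange(32))   # known signal
x = np.zeros(256, dtype=complex)
x[100:132] = s                                     # signal starts at sample 100
x += 0.5 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

# Matched filter = time-reversed, conjugated copy of the signal
h = np.conj(s[::-1])
y = np.convolve(x, h)

peak = np.argmax(np.abs(y))
print("peak at sample", peak)   # 131 = start + len(s) - 1
```

The convolution peak sits at sample 131, i.e. where the filter fully overlaps the embedded signal, even though the raw samples are noisy.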

The math helps to understand what is going on here. In particular, I want to derive that the peak instantaneous signal power divided by the average noise power at the output of a matched filter is equal to twice the input signal energy divided by the input noise power, regardless of the waveform used by the radar.

Suppose we have some signal $r(t) = s(t) + n(t)$, where $n(t)$ is the noise and $s(t)$ is the signal. The signal is finite with duration $T$, and let’s assume the noise is white Gaussian noise with spectral height $N_0/2$. If the combined signal is input into a filter with impulse response $h(t)$ and the resulting output is $y(t)$, you can write the signal and noise outputs ($y_s$ and $y_n$) in the time domain:

$$ y_s(t) = \int_0^t s(u) h(t-u)\,du $$
$$ y_n(t) = \int_0^t n(u) h(t-u)\,du $$

Since we want to maximize the SNR, we expand the above:

$$\text{SNR} = \frac{y_s^2(t)}{E\left[y_n^2(t) \right]}$$
$$ = \frac{ \left[ \int_0^t s(u) h(t-u)\,du \right]^2}{\text{E}\left[ \left( \int_0^t n(u) h(t-u)\,du \right)^2 \right]}$$

The denominator can be expanded:

$$\text{E} \left[y_n^2(t) \right] = \text{E}\left[ \int_0^t n(u) h(t-u)\,du \int_0^t n(v) h(t-v)\,dv \right] $$

Or

$$ \int_0^t \int_0^t E [ n(u) n(v) ] h(t-u) h(t-v) du\,dv $$

We can further simplify this by invoking the standard white-noise model, $E[n(u)\,n(v)] = \frac{N_0}{2}\,\delta(u-v)$:

$$ E[y_n^2] = \frac{N_0}{2} \int_0^t \int_0^t \delta(u-v) h(t-u) h(t – v) du\,dv $$

Which simplifies nicely to:

$$ \frac{N_0}{2} \int_0^t h^2 (t-u)\, du $$

Now all together we get:

$$ SNR = \frac{ \left[ \int_0^t s(u) h(t-u)\,du \right]^2 }{\frac{N_0}{2} \int_0^t h^2 (t-u)\, du } $$

In order to simplify further, we employ the Cauchy-Schwarz inequality, which says that for any two vectors $A$ and $B$ in a Hilbert space,

$$ \langle A, B \rangle^2 \leq \|A\|^2 \|B\|^2 \text{,}$$

with equality only when $A = k\,B$ for some constant $k$. Applying this to the numerator, and writing $q(u) = h(t-u)$:

$$ \left| \int_0^t s(u)\,q(u)\, du \right|^2 \leq \int_0^t s^2(u)\, du \int_0^t q^2(u)\, du $$

and equality is achieved when $q(u) = k\,s(u)$.

If we pick $h(t-u) = k\,s(u)$, that is, $h(v) = k\,s(t-v)$ (the time-reversed signal, as promised), we can write the optimal SNR as:

$$ SNR^{\text{opt}} (t) = \frac{k^2 \left[ \int_0^t s^2 (u)\, du \right]^2 }{ \frac{N_0 k^2}{2} \int_0^t s^2(u)\, du } = \frac{\int_0^t s^2(u)\, du}{N_0/2} $$

Since $s(t)$ has a finite duration $T$, the SNR is maximized by setting $t=T$, which gives the well-known formula:
$$SNR^{\text{opt}} = \frac{\int_0^T s^2(u)\, du}{N_0/2} = \frac{2 \epsilon}{N_0} \text{,}$$
where $\epsilon$ is the energy of the input signal.
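As a sanity check on this result, here is a small discrete-time version in Python. The rectangular pulse and all parameters below are my own choices, not from the derivation above; the point is only that the computed output SNR matches $2\epsilon/N_0$.

```python
import numpy as np

# Discrete-time check of SNR_opt = 2*E/N0 for a matched filter.
# Pulse shape and parameters are illustrative assumptions.
fs = 1e6              # sample rate, Hz
T = 1e-4              # pulse duration, s
N0 = 2.0              # noise spectral height is N0/2 = 1
dt = 1.0 / fs
n = int(round(T * fs))

s = np.ones(n)                      # rectangular pulse s(t)
h = s[::-1].conj()                  # matched filter: time-reversed conjugate

E = np.sum(np.abs(s) ** 2) * dt     # input signal energy
peak = np.abs(np.sum(s * h[::-1]) * dt) ** 2    # peak output power |y_s(T)|^2
noise = (N0 / 2) * np.sum(np.abs(h) ** 2) * dt  # average output noise power

snr = peak / noise
print(snr, 2 * E / N0)              # the two agree, for any waveform s
```

Swapping in any other waveform for `s` leaves the agreement intact, which is the "regardless of the waveform" part of the claim.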

So, what can we do with matched filters?

Let’s look at an example that compares the results of matched filtering with and without spectrum weighting. (Spectrum weighting is often used with linear FM waveforms to reduce sidelobes.)

The simplest pulse compression technique I know is shifting the frequency linearly throughout the pulse. For those not familiar with pulse compression, a little review might be helpful. A fundamental problem in designing a good radar system is resolving small, closely spaced targets at long range. Detection at long range requires high pulse energy, and the easiest way to get it is to transmit a longer pulse. However, a long pulse degrades range resolution. We can have our cake and eat it too if we encode a frequency change within the longer pulse. Hence, frequency or phase modulation of the signal is used to achieve high range resolution even when a long pulse is required.
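To make "degrades range resolution" concrete: an unmodulated pulse of width $\tau$ resolves targets no closer than $c\tau/2$, while a compressed pulse resolves $c/(2B)$, where $B$ is the swept bandwidth. A quick check with round numbers (my own, chosen to match the example later):

```python
# Range resolution: uncompressed pulse vs. LFM-compressed pulse.
c = 3e8            # speed of light, m/s (rounded)
tau = 1e-4         # pulse width: 0.1 ms
B = 100e3          # sweep bandwidth: 100 kHz

dR_uncompressed = c * tau / 2   # set by pulse duration
dR_compressed = c / (2 * B)     # set by bandwidth, not duration
print(dR_uncompressed, dR_compressed)   # 15 km vs. 1.5 km
```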

The capabilities of short-pulse and high range resolution radar are significant. For example, high range resolution allows resolving more than one target with good accuracy in range without using angle information. Other applications of short-pulse and high range resolution radar include clutter reduction, glint reduction, multipath resolution, target classification, and Doppler tolerance.

The LFM pulse in particular has the advantage of greater bandwidth while keeping the pulse duration short and envelope constant. A constant envelope LFM pulse has an ambiguity function similar to that of the square pulse, except that it is skewed in the delay-Doppler plane. Slight Doppler mismatches for the LFM pulse do not change the general shape of the pulse and reduce the amplitude very little, but they do appear to shift the pulse in time.

Before going forward, I wanted to establish the math of an LFM pulse. With a center frequency of $f_0$ and chirp slope $b$, we have a simple expression for the instantaneous phase (in cycles):

$$
\phi (t) = f_0 \, t + b\,t^2
$$

Taking the derivative of the phase function gives the instantaneous frequency:

$$ \omega_i (t) = f_0 + 2\,b\,t. $$

For a chirp pulse on the interval $[0, T_p]$, $\omega_i(0) = f_0$ is the minimum frequency and $\omega_i(T_p) = f_0 + 2\,b\,T_p$ is the maximum. The sweep bandwidth is then $2\,b\,T_p$, and if $u(t)$ is the unit pulse, a single pulse can be written as:

$$ S(t) = u(t) e^{j 2 \pi (f_0 t + b t^2)} \text{.}$$

I learn by doing, so I created a linear FM waveform with a duration of 0.1 milliseconds, a sweep bandwidth of 100 kHz, and a pulse repetition frequency of 5 kHz. I then added noise to the linear FM pulse and filtered the noisy signal using a matched filter, observing how the matched filter performs with and without spectrum weighting.
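For readers without MATLAB (the original presumably uses the Phased Array System Toolbox), a rough NumPy equivalent of that waveform can be sketched as follows; the 1 MHz sample rate is my assumption:

```python
import numpy as np

# Sketch of the LFM waveform described above: 0.1 ms pulse,
# 100 kHz sweep, 5 kHz PRF. Sample rate is assumed.
fs = 1e6        # sample rate, Hz (assumed)
Tp = 1e-4       # pulse duration: 0.1 ms
B = 100e3       # sweep bandwidth: 100 kHz
prf = 5e3       # pulse repetition frequency: 5 kHz

n = int(round(Tp * fs))            # samples in one pulse
t = np.arange(n) / fs
b = B / (2 * Tp)                   # chirp slope, so the sweep is 2*b*Tp = B
pulse = np.exp(1j * 2 * np.pi * b * t**2)   # baseband chirp (f0 = 0)

pri = int(round(fs / prf))         # samples in one repetition interval
x = np.zeros(pri, dtype=complex)   # one full PRI: pulse, then dead time
x[:n] = pulse
```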

Which produces the following chirped pulse,

[Figure: the chirped LFM pulse]

From here, we create two matched filters: one with no spectrum weighting and one with a Taylor window. We can then see the signal input and the matched filter output:

[Figure: matched filter input and output]
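The same comparison can be sketched in Python with SciPy; the chirp parameters and the Taylor window settings (`nbar=4`, 30 dB sidelobes) are my assumptions, not necessarily what the figures used:

```python
import numpy as np
from scipy.signal import windows

# Two matched filters for an LFM pulse: plain, and Taylor-weighted.
fs, Tp, B = 1e6, 1e-4, 100e3
n = int(round(Tp * fs))
t = np.arange(n) / fs
s = np.exp(1j * np.pi * (B / Tp) * t**2)      # baseband LFM pulse, sweep B

h_plain = np.conj(s[::-1])                    # matched filter
h_taylor = h_plain * windows.taylor(n, nbar=4, sll=30)

y_plain = np.abs(np.convolve(s, h_plain))     # compressed outputs
y_taylor = np.abs(np.convolve(s, h_taylor))

# Weighting lowers the sidelobes at the cost of some peak gain:
print(y_plain.max(), y_taylor.max())
```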

To really see how this works we need to add some noise:

% Create the signal and add noise.
sig = step(hwav);
rng(17)
x = sig + 0.5*(randn(length(sig),1) + 1j*randn(length(sig),1));

And we can see the impact noise has on the original signal:

[Figure: input signal plus noise]

and the final output (both with and without a Taylor window):

[Figure: matched filter output, with and without a Taylor window]

The Ambiguity Function

While it is cool to see the matched filter working, my background is more in stochastic modeling, and my interest is in the radar ambiguity function, which is a much more comprehensive way to examine the performance of a matched filter. The ambiguity function is a two-dimensional function of time delay and Doppler frequency, $\chi(\tau,\nu)$, showing the distortion of a returned pulse at the output of the receiver's matched filter due to the Doppler shift of the return from a moving target. It is the time response of a filter matched to a given finite-energy signal when the signal is received with a delay $\tau$ and a Doppler shift $\nu$ relative to the nominal values expected by the filter, or:

$$
|\chi ( \tau, \nu)| = \left| \int_{-\infty}^{\infty} u(t)\,u^* (t + \tau)\, e^{j 2 \pi \nu t}\, dt \right| \text{.}
$$
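Working straight from this definition, a brute-force numerical evaluation is only a few lines. The unit rectangular pulse and the sample counts below are my own choices:

```python
import numpy as np

# Brute-force |chi(tau, nu)| from the definition, for a unit
# rectangular pulse; parameters are illustrative.
fs, T = 1e5, 1e-4
n = int(round(T * fs))        # 10 samples
dt = 1.0 / fs
u = np.ones(n)
t = np.arange(n) * dt

def chi(m, nu):
    """|chi| at a delay of m whole samples and a Doppler shift nu (Hz)."""
    if m >= 0:                # u(t) u*(t + tau): keep the overlap region
        prod = u[: n - m] * np.conj(u[m:])
        tt = t[: n - m]
    else:
        prod = u[-m:] * np.conj(u[: n + m])
        tt = t[-m:]
    return np.abs(np.sum(prod * np.exp(1j * 2 * np.pi * nu * tt)) * dt)

print(chi(0, 0))              # peak at the origin: the pulse energy, T
```

Sweeping `m` and `nu` over a grid reproduces the surfaces shown below.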

What is the ambiguity function of an uncompressed pulse?

For an uncompressed, rectangular, pulse the ambiguity function is relatively simple and symmetric.
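For a unit-energy rectangular pulse of width $T$, the standard closed form (zero for $|\tau| > T$) is:

$$ |\chi(\tau, \nu)| = \left| \left( 1 - \frac{|\tau|}{T} \right) \frac{\sin \left[ \pi \nu \left( T - |\tau| \right) \right]}{\pi \nu \left( T - |\tau| \right)} \right| , \qquad |\tau| \leq T \text{.} $$

Setting $\nu = 0$ recovers the familiar triangular autocorrelation in delay, and setting $\tau = 0$ gives a sinc in Doppler, which is why the surface looks simple and symmetric along both axes.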

[Figure: ambiguity function of an uncompressed pulse]

What does the ambiguity function look like for the LFM pulse described above?

If we compare two pulses, each with a duty cycle of one (PRF of 20 kHz and pulse width of 50 µs), we can see their differing ambiguity functions:

[Figure: ambiguity function comparison of the two pulses]

If we look at the ambiguity function of an LFM pulse with the following properties:

        SampleRate: 200000
        PulseWidth: 5e-05
               PRF: 10000
    SweepBandwidth: 100000
    SweepDirection: 'Up'
     SweepInterval: 'Positive'
          Envelope: 'Rectangular'
      OutputFormat: 'Pulses'
         NumPulses: 5

then we can see how complex the surface is:

[Figure: 3-D ambiguity surface of the LFM pulse train]

References

  • Georgia Tech ECE 4606 lecture notes on matched filters: http://www.ece.gatech.edu/research/labs/sarl/tutorials/ECE4606/14-MatchedFilter.pdf
  • MATLAB help files