Satellite Imagery for Everyone - IEEE Spectrum



Here’s how you can order up a high-resolution image of any place on Earth

Every day, satellites circling overhead capture trillions of pixels of high-resolution imagery of the surface below. In the past, this kind of information was mostly reserved for specialists in government or the military. But these days, almost anyone can use it.

That’s because the cost of sending payloads, including imaging satellites, into orbit has dropped drastically. High-resolution satellite images, which used to cost tens of thousands of dollars, now can be had for the price of a cup of coffee.

What’s more, with the recent advances in artificial intelligence, companies can more easily extract the information they need from huge digital data sets, including ones composed of satellite images. Using such images to make business decisions on the fly might seem like science fiction, but it is already happening within some industries.

These underwater sand dunes adorn the seafloor between Andros Island and the Exuma islands in the Bahamas. The turquoise to the right reflects a shallow carbonate bank, while the dark blue to the left marks the edge of a local deep called Tongue of the Ocean. This image was captured in April 2020 using the Moderate Resolution Imaging Spectroradiometer on NASA’s Terra satellite.

Joshua Stevens/NASA Earth Observatory

Here’s a brief overview of how you, too, can access this kind of information and use it to your advantage. But before you’ll be able to do that effectively, you need to learn a little about how modern satellite imagery works.

The orbits of Earth-observation satellites generally fall into one of two categories: GEO and LEO. The former is shorthand for geosynchronous equatorial orbit. GEO satellites are positioned roughly 36,000 kilometers above the equator, where they circle in sync with Earth’s rotation. Viewed from the ground, these satellites appear to be stationary, in the sense that their bearing and elevation remain constant. That’s why GEO is said to be a geostationary orbit.
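That 36,000-km figure isn't arbitrary; it falls directly out of Kepler's third law. Here's a quick back-of-the-envelope check in Python (the constants are standard published values):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3        # Earth's equatorial radius, m
SIDEREAL_DAY = 86164.1      # one full rotation of Earth, s

# Kepler's third law gives the orbital radius whose period
# matches Earth's rotation
r = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1e3
print(f"GEO altitude: {altitude_km:,.0f} km")   # ~35,786 km above the equator
```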

Such orbits are, of course, great for communications relays—it’s what allows people to mount satellite-TV dishes on their houses in a fixed orientation. But GEO satellites are also appropriate when you want to monitor some region of Earth by capturing images over time. Because the satellites are so high up, the resolution of that imagery is quite coarse, however. So these orbits are primarily used for observation satellites designed to track changing weather conditions over broad areas.

Being stationary with respect to Earth means that GEO satellites are always within range of a downlink station, so they can send data back to Earth in minutes. This allows them to alert people to changes in weather patterns almost in real time. Most of this kind of data is made available for free by the U.S. National Oceanic and Atmospheric Administration.

In March 2021, the container ship Ever Given ran aground, blocking the Suez Canal for six days. This satellite image of the scene, obtained using synthetic-aperture radar, shows the kind of resolution that is possible with this technology.

The other option is LEO, which stands for low Earth orbit. Satellites placed in LEO are much closer to the ground, which allows them to obtain higher-resolution images. And the lower you can go, the better the resolution you can get. The company Planet, for example, increased the resolution of its recently completed satellite constellation, SkySat, from 72 centimeters per pixel to just 50 cm—an incredible feat—by lowering the orbits its satellites follow from 500 to 450 km and improving the image processing.
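For a fixed camera, ground resolution scales roughly linearly with altitude, so the lower orbit alone accounts for only part of Planet's improvement. A sketch (the linear-scaling model is a simplification that ignores the processing gains the article mentions):

```python
# Ground sample distance (GSD) for fixed optics scales linearly with
# altitude: GSD = altitude * pixel_pitch / focal_length.
gsd_at_500km = 0.72                        # m per pixel, original orbit
gsd_at_450km = gsd_at_500km * 450 / 500    # same optics, lower orbit
print(f"{gsd_at_450km * 100:.1f} cm")      # ~64.8 cm; the remaining gain
                                           # down to 50 cm came from
                                           # improved image processing
```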

The best commercially available spatial resolution for optical imagery is 25 cm, which means that one pixel represents a 25-by-25-cm area on the ground—roughly the size of your laptop. A handful of companies capture data with 25-cm to 1-meter resolution, which is considered high to very high resolution in this industry. Some of these companies also offer data from 1- to 5-meter resolution, considered medium to high resolution. Finally, several government programs have made optical data available at 10-, 15-, 30-, and 250-meter resolutions for free with open data programs. These include NASA/U.S. Geological Survey Landsat, NASA MODIS (Moderate Resolution Imaging Spectroradiometer), and ESA Copernicus. This imagery is considered low resolution.

Because the satellites that provide the highest-resolution images are in the lowest orbits, they sense less area at once. To cover the entire planet, a satellite can be placed in a polar orbit, which takes it from pole to pole. As it travels, Earth rotates under it, so on its next pass, it will be above a different part of Earth.

Many of these satellites don’t pass directly over the poles, though. Instead, they are placed in a near-polar orbit that has been specially designed to take advantage of a subtle bit of physics. You see, the spinning Earth bulges outward slightly at the equator. That extra mass causes the orbits of satellites that are not in polar orbits to shift or (technically speaking) to precess. Satellite operators often take advantage of this phenomenon to put a satellite in what’s called a sun-synchronous orbit. Such orbits allow the repeated passes of the satellite over a given spot to take place at the same time of day. Not having the pattern of shadows shift between passes helps the people using these images to detect changes.

On 15 January of this year, an immensely powerful volcanic eruption rocked an uninhabited island in the South Pacific known as Hunga Tonga-Hunga Ha’apai [top]. The massive eruption, which had far-reaching effects, was captured by NOAA's Geostationary Operational Environmental Satellite 17 [bottom].

Top: SkySat/Planet; Bottom: Joshua Stevens and Lauren Dauphin/NOAA/NESDIS/NASA

It usually takes 24 hours for a satellite in polar orbit to survey the entire surface of Earth. To image the whole world more frequently, satellite companies use multiple satellites, all equipped with the same sensor and following different orbits. In this way, these companies can provide more frequently updated images of a given location. For example, Maxar’s Worldview Legion constellation, launching later this year, includes six satellites.
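The revisit arithmetic follows from the orbital period. For a SkySat-like 450-km orbit, a satellite circles Earth roughly every 94 minutes, giving about 15 passes per day, each over a different swath of the rotating planet:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3        # Earth's equatorial radius, m

altitude = 450e3            # a SkySat-like low Earth orbit, m
a = R_EARTH + altitude      # orbital radius for a circular orbit
period_min = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60
print(f"period: {period_min:.1f} min, "
      f"orbits per day: {24 * 60 / period_min:.1f}")
```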

After a satellite captures some number of images, all that data needs to be sent down to Earth and processed. The time required for that varies.

DigitalGlobe (which Maxar acquired in 2017) recently announced that it had managed to send data from a satellite down to a ground station and then store it in the cloud in less than a minute. That was possible because the image sent back was of the parking lot of the ground station itself, so the satellite didn't have to travel from the collection point to a position within range of the station before doing the data "dumping," as this process is called.

In general, Earth-observation satellites in LEO don’t capture imagery all the time—they do that only when they are above an area of special interest. That’s because these satellites are limited in how much data they can send at one time. Typically, they can transmit data for only 10 minutes or so before they get out of range of a ground station. And they cannot record more data than they’ll have time to dump.
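The arithmetic behind that constraint is simple. The downlink rate below is an assumed figure for illustration; real rates vary by satellite and radio band:

```python
# How much data can one ground-station pass carry?
downlink_gbps = 1.2                     # assumed downlink rate, Gbit/s
pass_seconds = 10 * 60                  # ~10 minutes within range of a station
budget_gb = downlink_gbps * pass_seconds / 8   # gigabytes per pass
print(f"~{budget_gb:.0f} GB per pass")  # recording more than this between
                                        # passes would be wasted effort
```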

Currently, ground stations are located mostly near the poles, the regions most frequently overflown by satellites in polar orbits. But we can soon expect distances to the nearest ground station to shorten, because both Amazon and Microsoft have announced intentions to build large networks of ground stations located all over the world. As it turns out, hosting the terabytes of satellite data collected daily is big business for these companies, which sell their cloud services (Amazon Web Services and Microsoft’s Azure) to satellite operators.

For now, if you are looking for imagery of an area far from a ground station, expect a significant delay—maybe hours—between capture and transmission of the data. The data will then have to be processed, which adds yet more time. The fastest providers currently make their data available within 48 hours of capture, but not all can manage that. While it is possible, under ideal weather conditions, for a commercial entity to request a new capture and get the data it needs delivered the same week, such quick turnaround times are still considered cutting edge.

The best commercially available spatial resolution is 25 centimeters for optical imagery, which means that one pixel represents something roughly the size of your laptop.

I’ve been using the word “imagery,” but it’s important to note that satellites do not capture images the same way ordinary cameras do. The optical sensors in satellites are calibrated to measure reflectance over specific bands of the electromagnetic spectrum. This could mean they record how much red, green, and blue light is reflected from different parts of the ground. The satellite operator will then apply a variety of adjustments to correct colors, combine adjacent images, and account for parallax, forming what’s called a true-color composite image, which looks pretty much like what you would expect to get from a good camera floating high in the sky and pointed directly down.
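A heavily simplified version of that band-combination step can be sketched with NumPy. The percentile stretch here stands in for the operator's full color-correction pipeline, and the toy scene is invented for illustration:

```python
import numpy as np

def true_color(red, green, blue, low=2, high=98):
    """Stack three reflectance bands into an 8-bit true-color composite,
    stretching each band between its low/high percentiles (a simplified
    stand-in for an operator's color-correction pipeline)."""
    out = []
    for band in (red, green, blue):
        lo, hi = np.percentile(band, [low, high])
        scaled = np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)
        out.append((scaled * 255).astype(np.uint8))
    return np.dstack(out)

# toy 2x2 "scene" of reflectance values in [0, 1]
r = np.array([[0.1, 0.4], [0.2, 0.9]])
g = np.array([[0.1, 0.3], [0.2, 0.8]])
b = np.array([[0.0, 0.2], [0.1, 0.7]])
print(true_color(r, g, b).shape)   # (2, 2, 3): rows, columns, RGB channels
```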

Imaging satellites can also capture data outside of the visible-light spectrum. The near-infrared band is widely used in agriculture, for example, because these images help farmers gauge the health of their crops. This band can also be used to detect soil moisture and a variety of other ground features that would otherwise be hard to determine.
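The standard crop-health calculation from the near-infrared band is the normalized difference vegetation index (NDVI). The reflectance values below are illustrative:

```python
# NDVI: healthy vegetation reflects strongly in near-infrared (NIR) and
# absorbs red light, so this ratio separates plants from bare ground.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(0.45, 0.05))   # ~0.8  -> dense, healthy vegetation
print(ndvi(0.20, 0.15))   # ~0.14 -> sparse cover or bare soil
```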

Longer-wavelength “thermal” IR does a good job of penetrating smoke and picking up heat sources, making it useful for wildfire monitoring. And synthetic-aperture radar satellites, which I discuss in greater detail below, are becoming more common because the images they produce aren’t affected by clouds and don’t require the sun for illumination.

You might wonder whether aerial imagery, say, from a drone, wouldn’t work at least as well as satellite data. Sometimes it can. But for many situations, using satellites is the better strategy. Satellites can capture imagery over areas that would be difficult to access otherwise because of their remoteness, for example. Or there could be other sorts of accessibility issues: The area of interest could be in a conflict zone, on private land, or in another place that planes or drones cannot overfly.

So with satellites, organizations can easily monitor the changes taking place at various far-flung locations. Satellite imagery allows pipeline operators, for instance, to quickly identify incursions into their right-of-way zones. The company can then take steps to prevent a disastrous incident, such as someone puncturing a gas pipeline while construction is taking place nearby.

This SkySat image shows the effect of a devastating landslide that took place on 30 December 2020. Debris from that landslide destroyed buildings and killed 10 people in the Norwegian village of Ask.

The ability to compare archived imagery with recently acquired data has helped a variety of industries. For example, insurance companies sometimes use satellite data to detect fraudulent claims (“Looks like your house had a damaged roof when you bought it…”). And financial-investment firms use satellite imagery to evaluate such things as retailers’ future profits based on parking-lot fullness or to predict crop prices before farmers report their yields for the season.

Satellite imagery provides a particularly useful way to find or monitor the location of undisclosed features or activities. Sarah Parcak of the University of Alabama, for example, uses satellite imagery to locate archaeological sites of interest. 52Impact, a consulting company in the Netherlands, identified undisclosed waste dump sites by training an algorithm to recognize their telltale spectral signature. Satellite imagery has also helped identify illegal fishing activities, fight human trafficking, monitor oil spills, get accurate reporting on COVID-19 deaths, and even investigate Uyghur internment camps in China—all situations where the primary actors couldn’t be trusted to accurately report what’s going on.

Despite these many successes, investigative reporters and nongovernmental organizations aren’t yet using satellite data regularly, perhaps because even the small cost of the imagery is a deterrent. Thankfully, some kinds of low-resolution satellite data can be had for free.

The first places to look for free satellite imagery are the Copernicus Open Access Hub and EarthExplorer. Both offer free access to a wide range of open data. The imagery is lower resolution than what you can purchase, but if the limited resolution meets your needs, why spend money?

If you require medium- or high-resolution data, you might be able to buy it directly from the relevant satellite operator. This field recently went through a period of mergers and acquisitions, leaving only a handful of providers, the big three in the West being Maxar and Planet in the United States and Airbus in Germany. There are also a few large Asian providers, such as SI Imaging Services in South Korea and Twenty First Century Aerospace Technology in Singapore. Most providers have a commercial branch, but they primarily target government buyers. And they often require large minimum purchases, which is unhelpful to companies looking to monitor hundreds of locations or fewer.

Expect the distance to the nearest ground station to shorten because both Amazon and Microsoft have announced intentions to build large networks of ground stations located all over the world.

Fortunately, approaching a satellite operator isn’t the only option. In the past five years, a cottage industry of consultants and local resellers with exclusive deals to service a certain market has sprung up. Aggregators and resellers spend years negotiating contracts with multiple providers so they can offer customers access to data sets at more attractive prices, sometimes for as little as a few dollars per image. Some companies providing geographic information systems—including Esri, L3Harris, and Safe Software—have also negotiated reselling agreements with satellite-image providers.

Traditional resellers are middlemen who will connect you with a salesperson to discuss your needs, obtain quotes from providers on your behalf, and negotiate pricing and priority schedules for image capture and sometimes also for the processing of the data. This is the case for Apollo Mapping, European Space Imaging, Geocento, LandInfo, Satellite Imaging Corp., and many more. The more innovative resellers will give you access to digital platforms where you can check whether an image you need is available from a certain archive and then order it. Examples include LandViewer from EOS and Image Hunter from Apollo Mapping.

More recently, a new crop of aggregators began offering customers the ability to programmatically access Earth-observation data sets. These companies work best for people looking to integrate such data into their own applications or workflows. These include the company I work for, SkyWatch, which provides such a service, called EarthCache. Other examples are UP42 from Airbus and Sentinel Hub from Sinergise.

While you will still need to talk with a sales rep to activate your account—most often to verify that you will use the data in ways that fit the company’s terms of service and licensing agreements—once you’ve been granted access to their applications, you will be able to programmatically order archive data from one or multiple providers. SkyWatch is, however, the only aggregator allowing users to programmatically request future data to be collected (“tasking a satellite”).

While satellite imagery is fantastically abundant and easy to access today, two changes are afoot that will expand further what you can do with satellite data: faster revisits and greater use of synthetic-aperture radar (SAR).

Satellite images have helped to reveal China’s treatment of its Muslim Uyghur minority. About a million Uyghurs (and other ethnic minorities) have been interned in prisons or camps like the one shown here [top], which lies to the east of the city of Ürümqi, the capital of China’s Xinjiang Uyghur Autonomous Region. Another satellite image [bottom] shows the characteristic oval shape of a fixed-chimney Bull’s trench kiln, a type widely used for manufacturing bricks in southern Asia. This one is located in Pakistan’s Punjab province. This design poses environmental concerns because of the sooty air pollution it generates, and such kilns have also been associated with human-rights abuses.

Top: CNES/Airbus/Google Earth; Bottom: Maxar Technologies/Google Earth

The first of these developments is not surprising. As more Earth-observation satellites are put into orbit, more images will be taken, more often. So how frequently a given area is imaged by a satellite will increase. Right now, that’s typically two or three times a week. Expect the revisit rate soon to become several times a day. This won’t entirely address the challenge of clouds obscuring what you want to view, but it will help.

The second development is more subtle. Data from the two satellites of the European Space Agency’s Sentinel-1 SAR mission, available at no cost, has enabled companies to dabble in SAR over the last few years.

With SAR, the satellite beams radio waves down and measures the return signals bouncing off the surface. It does that continually, and clever processing is used to turn that data into images. The use of radio allows these satellites to see through clouds and to collect measurements day and night. Depending on the radar band that’s employed, SAR imagery can be used to judge material properties, moisture content, precise movements, and elevation.
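One small but near-universal processing step with SAR data: backscatter values are usually converted from linear power to decibels before interpretation, because surface types separate more cleanly on a logarithmic scale. The example values below are illustrative:

```python
import math

def sigma0_db(linear_backscatter):
    """Convert linear radar backscatter (sigma0) to decibels."""
    return 10 * math.log10(linear_backscatter)

print(f"{sigma0_db(0.5):.1f} dB")    # -3.0 dB: a strongly reflecting surface
print(f"{sigma0_db(0.01):.1f} dB")   # -20.0 dB: a weak return, such as
                                     # specular reflection off calm water
```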

As more companies get familiar with such data sets, there will no doubt be a growing demand for satellite SAR imagery, which has been widely used by the military since the 1970s. But it’s just now starting to appear in commercial products. You can expect those offerings to grow dramatically, though.

Indeed, a large portion of the money being invested in this industry is currently going to fund large SAR constellations, including those of Capella Space, Iceye, Synspective, XpressSAR, and others. The market is going to get crowded fast, which is great news for customers. It means they will be able to obtain high-resolution SAR images of the place they’re interested in, taken every hour (or less), day or night, cloudy or clear.

People will no doubt figure out wonderful new ways to employ this information, so the more folks who have access to it, the better. This is something my colleagues at SkyWatch and I deeply believe, and it’s why we’ve made it our mission to help democratize access to satellite imagery.

One day in the not-so-distant future, Earth-observation satellite data might become as ubiquitous as GPS, another satellite technology first used only by the military. Imagine, for example, being able to take out your phone and say something like, “Show me this morning’s soil-moisture map for Grover’s Corners High; I want to see whether the baseball fields are still soggy.”

This article appears in the March 2022 print issue as “A Boom with a View.”

Editor's note: The original version of this article incorrectly stated that Maxar's Worldview Legion constellation launched last year.

Dexter Jagula is the cofounder and chief operating officer of SkyWatch, a company dedicated to making satellite imagery accessible.

Human speech and protein structure are close enough for AI purposes

Payal Dhar (she/they) is a freelance journalist on science, technology, and society. They write about AI, cybersecurity, surveillance, space, online communities, games, and any shiny new technology that catches their eye. You can find and DM Payal on Twitter (@payaldhar).

Human languages have much in common with proteins, at least in terms of computational modeling. This has led research teams to apply novel methods from natural-language processing (NLP) to protein design. One of these teams—Birte Höcker’s protein-design lab at Bayreuth University, in Germany—describes ProtGPT2, a language model based on OpenAI’s GPT-2 that generates novel protein sequences following the principles of natural ones.

Just as letters from the alphabet form words and sentences, naturally occurring amino acids combine in different ways to form proteins. And protein sequences, just like natural languages, store structure and function in their amino-acid sequence with extreme efficiency.

ProtGPT2 is a deep, unsupervised model that takes advantage of advances in transformer architecture that have also caused rapid progress in NLP technologies. The architecture has two modules, explains Noelia Ferruz, a coauthor of the paper and the person who trained ProtGPT2: one module to understand input text, and another that processes or generates new text. It was the second one, the decoder module that generates new text, that went into the development of ProtGPT2.
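The decoder's job can be illustrated with a toy autoregressive loop over the 20-amino-acid alphabet. The uniform "model" below is a stand-in invented for illustration; ProtGPT2's transformer instead predicts a learned, context-dependent distribution for each next token:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def toy_next_token_probs(prefix):
    # Stand-in for a trained decoder: a real model would condition on the
    # prefix; here every residue is equally likely.
    return {aa: 1 / len(AMINO_ACIDS) for aa in AMINO_ACIDS}

def generate(length, seed=0):
    """Sample a sequence one token at a time, feeding each choice back in."""
    rng = random.Random(seed)
    seq = ""
    for _ in range(length):
        probs = toy_next_token_probs(seq)
        seq += rng.choices(list(probs), weights=list(probs.values()))[0]
    return seq

print(generate(30))   # one sampled toy "protein," 30 residues long
```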

Researchers have used GPT-2 to train a model to learn the protein “language,” generate stable proteins, and explore “dark” regions of protein space.

“At the time we created this model, there were many others that were using the first module,” she says, such as ESM, ProtTrans, and ProteinBERT. “Ours was the first one publicly released at the time that was a decoder.” It was also the first time someone had directly applied GPT-2, she adds.

Ferruz herself is a big fan of GPT-2. “I find it very impressive that there was a model capable of writing English,” she says. This is a well-known transformer model that was pretrained on 40 gigabytes of Internet text in English in an unsupervised manner—that is, it used raw text with no human labeling—to generate the next word in sentences. The GPT-x series has been shown to efficiently produce long, coherent text, often indistinguishable from something written by a human—to the extent that potential misuse is a concern.

Given the capabilities of GPT-2, the Bayreuth researchers were optimistic about using it to train a model to learn the protein language, generate stable proteins, and also explore “dark” regions of the protein space. Ferruz trained ProtGPT2 on a data set of about 50 million nonannotated sequences across the whole protein space. To evaluate the model, the researchers compared a data set of 10,000 sequences generated by ProtGPT2 with a random set of 10,000 sequences from the training data set.

“We could add labels, and potentially in the future start generating sequences with a specific function.” —Noelia Ferruz, University of Bayreuth, Germany

They found the sequences predicted by the model to be similar in secondary structure to naturally occurring proteins. ProtGPT2 can predict proteins that are stable and functional, although, Ferruz says, this will be verified by laboratory experiments on a set of 30 or so proteins in the coming months. ProtGPT2 also models proteins that do not occur in nature, opening up possibilities in the protein design space.

Each node represents a sequence. Two nodes are linked when they have an alignment of at least 20 amino acids and 70 percent HHsearch probability. Colors depict the different SCOPe classes, and ProtGPT2 sequences are shown in white.

The model can generate millions of proteins in minutes, says Ferruz. “Without further improvements, people could take the model, which is freely available, and fine-tune a set of sequences to produce more sequences in this region,” such as for antibiotics or vaccines. But also, she adds, with small modifications in the training process “we could add labels, and potentially in the future start generating sequences with a specific function.” This in turn has potential for uses in not just medical and biomedical fields but also in environmental sciences and more.

Ferruz credits the rapid developments in the NLP space for the success of ProtGPT2, but also points out that this is an ever-changing space—“It’s crazy, all the things that have happened in the last 12 months.” At the moment, she and her colleagues are already writing a review of their work. “I trained this model over Christmas [2021],” she says, “and at the time, there was another model that had been described...but it wasn’t available.” Yet by this spring, she says, other models had been released.

ProtGPT2’s predicted sequences spanned new, rarely explored regions of protein structure and function. However, a few weeks ago, DeepMind released structures of over 200 million proteins. “So I guess we don’t have that much of a dark proteome anymore,” Ferruz says. “But still, there are regions…that haven’t been explored.”

There is plenty of work ahead, though. “I would like to have control over the design process,” Ferruz adds. “We will need to take the sequence, predict the structure, and maybe predict the function if it has any….That will be very challenging.”

Superfast operations may help neutral-atom-based machines outrun disruptions

Charles Q. Choi is a science reporter who contributes regularly to IEEE Spectrum. He has written for Scientific American, The New York Times, Wired, and Science, among others.

In this conceptual diagram of the world’s fastest two-qubit gate, two atoms, separated by a couple of micrometers, are captured by optical tweezers [red light] and manipulated by a superfast, 10-picosecond laser pulse [blue light].

Quantum computers theoretically can solve problems no regular computer might ever hope to solve. However, the key ingredients of most quantum computers—quantum bits, or qubits, tied together by quantum entanglement—are highly vulnerable to disruption from their surroundings. Now scientists in Japan have successfully executed an operation with two qubits in just 6.5 nanoseconds—the fastest ever, which may essentially outrun the effects of any outside interference.

Classical computers switch transistors either on or off to symbolize data as ones or zeroes. In contrast, quantum computers use quantum bits or qubits, which because of the strange nature of quantum physics can exist in a state called superposition where they are both 1 and 0 at the same time. This essentially lets each qubit perform two calculations at once.

However, quantum computers are notoriously fragile to outside interference, such as electronic, ionic, or thermal fluctuations. This means present-day state-of-the-art quantum computers are highly prone to mistakes, typically suffering roughly one error every 1,000 operations. In contrast, many practical applications demand error rates lower by a billionfold or more.
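The gap is starker than it sounds, because errors compound over the course of a computation:

```python
# With one error per 1,000 operations, a long computation almost surely fails.
p = 1e-3                       # per-operation error probability
for n_ops in (1_000, 1_000_000):
    survival = (1 - p) ** n_ops
    print(f"{n_ops:>9} ops -> {survival:.2e} chance of zero errors")
```

At a thousand operations the machine succeeds only about a third of the time; at a million, essentially never.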

“We can manipulate [neutral atom qubits] on completely new timescales, and it redefines what can be done with this platform.” —Sylvain de Leseleuc, the Institute for Molecular Science, Okazaki, Japan

One way to deal with the effects of noise in quantum computers is to speed up the rate at which they perform elementary operations known as quantum gates—the quantum-computing version of the logic gates that conventional computers use to perform computations. The chance that a quantum gate will experience a mistake from noise grows over time, so the faster they operate, the lower the probability they will fail.

In the new study, researchers experimented with qubits composed of electrically neutral rubidium atoms. Neutral atoms may possess a number of benefits as qubits in comparison with other quantum-computing platforms.

For instance, qubits based on atoms benefit from the way these particles are virtually all identical. In contrast, qubits based on devices, such as the superconducting circuits that Google and IBM use in their quantum computers, must cope with the problems that result from the variations between components that inevitably arise during fabrication.

Another quantum-computing platform that has attracted growing interest uses electromagnetically trapped electrically charged ions. However, ions repel each other, making it difficult to stack them in a dense manner. By comparison, scientists can pack neutral atoms closer together.

In addition, the fact that neutral atoms lack electric charge means they do not interact easily with other atoms. This makes them more immune to noise and means they can stay coherent, or in superposition, for a relatively long time. For example, in May, Berkeley, Calif.–based quantum-computing startup Atom Computing revealed it could keep neutral-atom qubits coherent for roughly 40 seconds, the longest coherence time ever demonstrated on a commercial platform. Moreover, neutral atoms can be cooled with lasers instead of the bulky refrigeration needed with a number of other qubit platforms, such as superconducting circuits.

The scientists first trapped and cooled neutral atoms with arrays of laser beams. They next used these lasers to excite electrons to so-called Rydberg orbitals far from their atomic nuclei. The resulting “Rydberg atoms” can be hundreds to thousands of times as large as the atoms would be in their ground states.

In theory, the giant nature of Rydberg orbitals can lead Rydberg atoms to strongly experience interactions such as entanglement with each other, enabling rapid quantum gates, says study senior author Kenji Ohmori, a quantum physicist at the Institute for Molecular Science in Okazaki, Japan. However, previously no one had realized this possibility because of factors such as the stringent requirements for the positions of the atoms.

In the new study, the researchers used laser beams to control the distance between atoms with a precision of 30 nanometers. They also cooled the atoms to an ultralow temperature about 1/100,000 of a degree above absolute zero, to reduce any jittering from heat.

The researchers next used ultrashort laser pulses that lasted just 10 picoseconds—trillionths of a second—to excite a pair of these atoms to a Rydberg state at the same time. This let them execute a quantum gate entangling the qubits in just 6.5 ns, making it the fastest quantum gate to date. (The previous speed record for a quantum gate was 15 ns, achieved by Google in 2020 with superconducting circuits.)

“We can manipulate Rydberg atoms on completely new timescales, and it redefines what can be done with this platform,” says study coauthor Sylvain de Leseleuc, a quantum physicist at the Institute for Molecular Science in Okazaki, Japan.

Rydberg-atom quantum computers typically experience an error rate from noise of a few percent per microsecond, de Leseleuc says. This new two-qubit gate is hundreds of times as fast as this error rate, suggesting that quantum computers built using this strategy may ignore the effects of noise.
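Putting those two figures together gives a rough per-gate error estimate. Taking "a few percent" as 2 percent is an assumption for illustration; the exact rate varies by setup:

```python
noise_rate_per_us = 0.02   # assumed noise-induced error rate, per microsecond
p_err = {gate_ns: noise_rate_per_us * gate_ns / 1e3
         for gate_ns in (6.5, 15, 1000)}   # this gate, Google's 2020 record,
                                           # and a microsecond-scale gate
for gate_ns, p in p_err.items():
    print(f"{gate_ns:>7} ns gate -> error probability ~ {p:.1e}")
```

At 6.5 ns, the estimated noise-induced error per gate is on the order of one in ten thousand, far below the few-percent-per-microsecond background.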

Although the researchers could space the Rydberg atoms anywhere from 1.5 to 5 micrometers apart, they ultimately chose a distance of roughly 2.4 µm. The interactions between Rydberg atoms become stronger the closer the atoms are, de Leseleuc says. A shorter distance would thus give a faster gate that is less sensitive to external noise but more difficult to control, while a greater distance would give a slower, more noise-sensitive gate that is easier to control, he explains.

Future work may aim for even faster, more reliable performance with a more stable laser whose energy fluctuates less than the commercial device used in these experiments, de Leseleuc says.

“We are opening a new playground with Rydberg atoms that we could call ‘ultrafast Rydberg physics’ as well as ‘ultrafast Rydberg quantum engineering,’ ” Ohmori says.

The scientists detailed their findings online 8 August in the journal Nature Photonics.
