ROBO SPACE
Explained: Generative AI
AI image services can produce polished artwork in minutes, as seen in this illustration created for a recent UN Security Council meeting on the implications of AI for global security.
by Adam Zewe | MIT News
Boston MA (SPX) Nov 10, 2023

A quick scan of the headlines makes it seem like generative artificial intelligence is everywhere these days. In fact, some of those headlines may actually have been written by generative AI, like OpenAI's ChatGPT, a chatbot that has demonstrated an uncanny ability to produce text that seems to have been written by a human.

But what do people really mean when they say "generative AI"?

Before the generative AI boom of the past few years, when people talked about AI, typically they were talking about machine-learning models that can learn to make a prediction based on data. For instance, such models are trained, using millions of examples, to predict whether a certain X-ray shows signs of a tumor or if a particular borrower is likely to default on a loan.
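
A minimal sketch of such a predictive model is a nearest-neighbor classifier; the loan-style features and labels below are made up for illustration, and real lenders and radiologists use far more sophisticated methods:

```python
import numpy as np

# Hypothetical training data: [income, debt] pairs with labels
# (1 = likely to default, 0 = likely to repay).
train_X = np.array([[1.0, 9.0], [2.0, 8.0], [9.0, 1.0], [8.0, 2.0]])
train_y = np.array([1, 1, 0, 0])

def predict(x):
    """Predict the label of the closest training example (1-NN)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]

print(predict(np.array([1.5, 8.5])))   # -> 1 (resembles the defaulters)
```

The model makes a prediction about new data; it never generates new data — the distinction the article draws next.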

Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.

"When it comes to the actual machinery underlying generative AI and other types of AI, the distinctions can be a little bit blurry. Oftentimes, the same algorithms can be used for both," says Phillip Isola, an associate professor of electrical engineering and computer science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

And despite the hype that came with the release of ChatGPT and its counterparts, the technology itself isn't brand new. These powerful machine-learning models draw on research and computational advances that go back more than 50 years.

An increase in complexity
An early example of generative AI is a much simpler model known as a Markov chain. The technique is named for Andrey Markov, a Russian mathematician who in 1906 introduced this statistical method to model the behavior of random processes. In machine learning, Markov models have long been used for next-word prediction tasks, like the autocomplete function in an email program.

In text prediction, a Markov model generates the next word in a sentence by looking at the previous word or a few previous words. But because these simple models can only look back that far, they aren't good at generating plausible text, says Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, who is also a member of CSAIL and the Institute for Data, Systems, and Society (IDSS).
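
The mechanics above fit in a few lines of Python; the corpus and the order-1 context below are toy choices for illustration:

```python
import random
from collections import defaultdict

def build_markov_chain(words, order=1):
    """Map each `order`-word context to the words observed after it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, start, length=10, seed=0):
    """Extend `start` by repeatedly sampling a random observed successor."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(start):]))
        if not successors:          # context never seen: stop
            break
        out.append(rng.choice(successors))
    return out

corpus = "the cat sat on the mat and the cat ran".split()
chain = build_markov_chain(corpus, order=1)
print(" ".join(generate(chain, start=("the",), length=5)))
```

Because the model conditions only on the previous word, its output is locally plausible but drifts incoherently over longer spans — exactly the limitation Jaakkola describes.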

"We were generating things way before the last decade, but the major distinction here is in terms of the complexity of objects we can generate and the scale at which we can train these models," he explains.

Just a few years ago, researchers tended to focus on finding a machine-learning algorithm that makes the best use of a specific dataset. But that focus has shifted a bit, and many researchers are now using larger datasets, perhaps with hundreds of millions or even billions of data points, to train models that can achieve impressive results.

The base models underlying ChatGPT and similar systems work in much the same way as a Markov model. But one big difference is that ChatGPT is far larger and more complex, with billions of parameters. And it has been trained on an enormous amount of data - in this case, much of the publicly available text on the internet.

In this huge corpus of text, words and sentences appear in sequences with certain dependencies. This recurrence helps the model understand how to cut text into statistical chunks that have some predictability. It learns the patterns of these blocks of text and uses this knowledge to propose what might come next.

More powerful architectures
While bigger datasets were one catalyst of the generative AI boom, a series of major research advances also produced more complex deep-learning architectures.

In 2014, a machine-learning architecture known as a generative adversarial network (GAN) was proposed by researchers at the University of Montreal. GANs use two models that work in tandem: One learns to generate a target output (like an image) and the other learns to discriminate true data from the generator's output. The generator tries to fool the discriminator, and in the process learns to make more realistic outputs. The image generator StyleGAN is based on these types of models.
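
The adversarial setup can be sketched with the smallest possible pair of models: a one-dimensional linear generator and a logistic-regression discriminator, trained by alternating gradient steps. This is an illustrative toy, not StyleGAN's architecture, and every hyperparameter below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0        # generator G(z) = a*z + b
w, c = 0.1, 0.0        # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)    # "true data": N(4, 1)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. try to fool D.
    g = (1 - sigmoid(w * fake + c)) * w
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f}")   # should drift toward 4
```

The generator never sees the real data directly; it improves only through the discriminator's feedback, which is the core of the adversarial idea.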

Diffusion models were introduced a year later by researchers at Stanford University and the University of California at Berkeley. By iteratively refining their output, these models learn to generate new data samples that resemble samples in a training dataset, and have been used to create realistic-looking images. A diffusion model is at the heart of the text-to-image generation system Stable Diffusion.
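
The noising half of a diffusion model has a simple closed form; the sketch below assumes a hypothetical linear noise schedule and omits the learned denoising network that real systems like Stable Diffusion train to run the process in reverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward (noising) process: beta[t] controls how much Gaussian noise
# is mixed in at step t. The linear schedule here is illustrative.
T = 1000
beta = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - beta)    # cumulative signal retained

def noise(x0, t):
    """Jump straight to step t: x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones(10000)                   # a "dataset" of constant values
early, late = noise(x0, 10), noise(x0, T - 1)
print(early.mean(), late.mean())      # near 1.0 early, near 0.0 late
```

By the final step the data is indistinguishable from pure noise; generation then consists of learning to undo these steps one at a time.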

In 2017, researchers at Google introduced the transformer architecture, which has been used to develop large language models, like those that power ChatGPT. In natural language processing, a transformer encodes each word in a corpus of text as a token and then generates an attention map, which captures each token's relationships with all other tokens. This attention map helps the transformer understand context when it generates new text.
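
A minimal attention map can be computed directly from token vectors; this sketch omits the learned query, key, and value projections that real transformers apply first:

```python
import numpy as np

def attention_map(X):
    """Scaled dot-product self-attention weights for token vectors X.

    Row i holds how strongly token i attends to every token,
    normalized so each row sums to 1.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)     # token-to-token affinities
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # row-wise softmax

# Three 4-dimensional token embeddings (random stand-ins for real ones).
X = np.random.default_rng(0).standard_normal((3, 4))
A = attention_map(X)
print(A.shape)   # (3, 3): one weight for every ordered token pair
```

Each row of the map is the "context" the model consults for that token when predicting what comes next.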

These are only a few of many approaches that can be used for generative AI.

A range of applications
What all of these approaches have in common is that they convert inputs into a set of tokens, which are numerical representations of chunks of data. As long as your data can be converted into this standard token format, then in theory, you could apply these methods to generate new data that look similar.
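
A toy character-level tokenizer illustrates the idea; production systems use learned subword vocabularies such as byte-pair encoding, but the chunk-to-integer principle is the same:

```python
def build_vocab(text):
    """Assign each distinct character an integer id."""
    return {ch: i for i, ch in enumerate(sorted(set(text)))}

def encode(text, vocab):
    """Text -> list of integer tokens."""
    return [vocab[ch] for ch in text]

def decode(tokens, vocab):
    """Integer tokens -> text."""
    inv = {i: ch for ch, i in vocab.items()}
    return "".join(inv[t] for t in tokens)

vocab = build_vocab("generative ai")
tokens = encode("generative ai", vocab)
assert decode(tokens, vocab) == "generative ai"   # round-trips exactly
print(tokens)
```

Once data is in token form, the same generative machinery applies whether the tokens stand for characters, image patches, or atoms in a crystal.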

"Your mileage might vary, depending on how noisy your data are and how difficult the signal is to extract, but it is really getting closer to the way a general-purpose CPU can take in any kind of data and start processing it in a unified way," Isola says.

This opens up a huge array of applications for generative AI.

For instance, Isola's group is using generative AI to create synthetic image data that could be used to train another intelligent system, such as by teaching a computer vision model how to recognize objects.

Jaakkola's group is using generative AI to design novel protein structures or valid crystal structures that specify new materials. The same way a generative model learns the dependencies of language, if it's shown crystal structures instead, it can learn the relationships that make structures stable and realizable, he explains.

But while generative models can achieve incredible results, they aren't the best choice for all types of data. For tasks that involve making predictions on structured data, like the tabular data in a spreadsheet, generative AI models tend to be outperformed by traditional machine-learning methods, says Devavrat Shah, the Andrew and Erna Viterbi Professor in Electrical Engineering and Computer Science at MIT and a member of IDSS and of the Laboratory for Information and Decision Systems.

"The highest value they have, in my mind, is to become this terrific interface to machines that are human friendly. Previously, humans had to talk to machines in the language of machines to make things happen. Now, this interface has figured out how to talk to both humans and machines," says Shah.

Raising red flags
Generative AI chatbots are now being used in call centers to field questions from human customers, but this application underscores one potential red flag of implementing these models - worker displacement.

In addition, generative AI can inherit and proliferate biases that exist in training data, or amplify hate speech and false statements. The models have the capacity to plagiarize, and can generate content that looks like it was produced by a specific human creator, raising potential copyright issues.

On the other side, Shah proposes that generative AI could empower artists, who could use generative tools to help them make creative content they might not otherwise have the means to produce.

In the future, he sees generative AI changing the economics in many disciplines.

One promising future direction Isola sees for generative AI is its use for fabrication. Instead of having a model make an image of a chair, perhaps it could generate a plan for a chair that could be produced.

He also sees future uses for generative AI systems in developing more generally intelligent AI agents.

"There are differences in how these models work and how we think the human brain works, but I think there are also similarities. We have the ability to think and dream in our heads, to come up with interesting ideas or plans, and I think generative AI is one of the tools that will empower agents to do that, as well," Isola says.

ai.spacedaily.com analysis

Relevance Scores:

1. Space Industry Analyst: 2/10
2. Space Finance Analyst: 2/10
3. Space Policy Maker: 3/10
4. Space S&T Professional: 5/10

Analyst Summary:

The relevance of generative AI to the space sector may not be immediately apparent, but it holds potential for various applications. For industry analysts, the direct impact is minimal, hence the low score. However, finance analysts might see potential for long-term investments in crossover technologies. Policy makers could be interested in the ethical and regulatory implications of AI advancements, which can extend to space technologies. Science and technology professionals within the space sector are likely to find the most relevance, as generative AI could aid in data analysis, mission simulations, and design of materials or structures.

Contextual Background:

Generative AI has seen a similar trajectory to many space technologies, where initial theoretical and small-scale applications eventually lead to significant breakthroughs. Just as the development of rocket technology led to the space race and satellite communications, the advances in AI from basic Markov models to sophisticated systems like ChatGPT mark a similar evolution in computational capabilities.

Historical Comparison:

Over the last two decades, the space industry has witnessed significant events like the commercialization of spaceflight and the advent of reusable rockets. Generative AI, while in a different domain, parallels this with its transition from academia to widespread commercial and creative use.

Relevance Criteria:

To ensure consistency, relevance scoring should consider the technological convergence potential, data dependency, and scalability of impact. Generative AI's relevance to space is more about potential future applications than immediate impact.

International Implications:

Generative AI fits into global trends of digital transformation and automation. In the space industry, international cooperation or competition in AI could parallel the partnerships and rivalries seen in space exploration and satellite technology.

Investigative Questions:

1. How can generative AI enhance data analysis for space missions?

2. What are the implications of AI in satellite image processing and interpretation?

3. Could generative AI improve design and simulation processes for spacecraft?

4. What policies are needed to govern AI use in sensitive space-related applications?

5. How will international AI advancements influence global space technology leadership?

Related Links
Computer Science and Artificial Intelligence Laboratory