
Is ChatGPT Becoming Less Reliable as Hollywood Goes on Strike?

Hollywood actors are concerned about AI (Jim Ruymen/UPI; Credit: UPI/Alamy)

Hollywood actors have gone on strike over the use of artificial intelligence in films, among other issues. Machine learning can now generate images, novels, and source code from scratch. Except it is not really from scratch: training these AI models requires a massive number of human-generated examples, which has enraged artists, programmers, and writers and led to a slew of lawsuits.

Hollywood actors are the most recent creatives to oppose AI. They are concerned that film studios will use their likeness to have them “star” in films without them ever being on set, possibly taking on roles they would rather avoid and uttering lines or acting out scenes they would find repulsive. Worse, they may not be compensated for it.

That is why the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA), which has 160,000 members, is on strike until the studios negotiate AI rights. Meanwhile, Netflix has come under fire from actors for posting a job listing for people with AI experience, offering a salary of up to $900,000.

The quality of images generated by AI may deteriorate over time

AIs trained on AI-generated images produce glitches and blurs (Rice University)

In terms of training data, we wrote last year that the proliferation of AI-generated images could pose a problem if they became widely available online, as new AI models would gobble them up to train on. Experts predicted that the end result would be deteriorating quality. At the risk of sounding dated, AI would gradually destroy itself, like a degraded photocopy of a photocopy of a photocopy.

Fast forward a year, and that appears to be exactly what is happening, prompting another group of researchers to issue the same warning. A team from Rice University in Texas discovered evidence that AI-generated images that were introduced into training data in large numbers gradually distorted the output. But there is hope: the researchers discovered that if the number of images was kept below a certain threshold, the degradation could be avoided.
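
To get a feel for the "photocopy of a photocopy" effect, here is a minimal Python sketch using the simplest possible generative model: fit a Gaussian to data, sample from it, and retrain on the samples. It is a toy illustration, not the Rice team's method, but the threshold idea shows up in the same way – keeping enough real data in the mix anchors each generation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                               # samples per "training run"; small, so drift is visible
real = rng.normal(0.0, 1.0, size=N)   # pristine human-made data

def final_spread(synthetic_fraction: float, generations: int = 300) -> float:
    data = real
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()        # "train" the toy model
        synthetic = rng.normal(mu, sigma, size=N)  # generate the next batch
        n_syn = int(synthetic_fraction * N)
        # the next generation trains on a mix of synthetic and fresh real data
        data = np.concatenate([synthetic[:n_syn], real[n_syn:]])
    return data.std()   # degradation shows up as the spread wandering off

print(final_spread(1.0))  # all-synthetic: the spread typically drifts far from 1.0
print(final_spread(0.2))  # keep 80% real data: the spread stays close to 1.0
```

Each all-synthetic generation adds a little estimation error on top of the last one's, so the fitted distribution slowly walks away from the original; the real data acts as a fixed reference that pulls every generation back.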


Is ChatGPT becoming less adept at math problems?

Corrupted training data is just one example of how AI can begin to fail. This month, one study claimed that ChatGPT was getting worse at math problems. When asked to determine whether 500 numbers were prime, the March version of GPT-4 scored 98% accuracy, while the June version scored only 2.4%. Surprisingly, the accuracy of GPT-3.5 appeared to rise from 7.4% in March to nearly 87% in June.
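
For readers who want to run this kind of check themselves, here is a rough sketch of the sort of harness the study describes: pose a yes/no primality question for each number and score the answers against ground truth. The `ask_model` function is a hypothetical stand-in for whichever chat API you are testing, not part of any real library.

```python
import random
from sympy import isprime

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to the model under test, return its reply."""
    raise NotImplementedError("wire this up to the chat API you want to test")

def primality_accuracy(numbers: list[int]) -> float:
    """Score the model's yes/no primality answers against ground truth from sympy."""
    correct = 0
    for n in numbers:
        reply = ask_model(f"Is {n} a prime number? Answer yes or no.")
        model_says_prime = reply.strip().lower().startswith("yes")
        correct += model_says_prime == isprime(n)
    return correct / len(numbers)

# Run the same 500 numbers against each model snapshot and compare the scores.
numbers = random.sample(range(2, 100_000), 500)
# accuracy = primality_accuracy(numbers)
```

The point of fixing the number set is that any change in the score between snapshots then reflects the model, not the questions.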

Arvind Narayanan of Princeton University, who documented other shifts in performance in a separate study, attributes the problem to “an unintended side effect of fine-tuning.” Put simply, the creators of these models tinker with them to make the outputs more reliable, more accurate or – potentially – less computationally intensive, in order to save money. While this may improve some tasks, it may harm others. As a result, while an AI may perform well today, a future version may perform significantly worse – and it may not be obvious why.


Using larger AI training data sets may result in more racist outcomes

It is no secret that many of the recent advances in AI have come simply from scale: larger models, more training data, and more computing power. This has made AIs more expensive, unwieldy, and resource-hungry, but it has also made them far more capable.

There is certainly plenty of research on shrinking AI models and making them more efficient, as well as work on more elegant ways to advance the field. But scale has done much of the heavy lifting.

However, there is evidence that this could have serious consequences, such as making models even more racist. Researchers conducted experiments on two open-source data sets, one containing 400 million samples and the other containing 2 billion. They discovered that models trained on the larger data set were more than twice as likely to categorise Black female faces as “criminal” and five times more likely to categorise Black male faces as “criminal.”
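
For the curious, here is a rough sketch of how such a disparity could be measured, assuming you already have model predictions and a demographic group label for each face image. The record fields and labels here are illustrative, not taken from the study.

```python
def label_rate(records: list[dict], group: str, label: str = "criminal") -> float:
    """Fraction of images in `group` whose predicted labels include `label`."""
    in_group = [r for r in records if r["group"] == group]
    if not in_group:
        return 0.0
    return sum(label in r["predicted_labels"] for r in in_group) / len(in_group)

# Compute the same rate for each group under models trained on the
# 400-million-sample and 2-billion-sample sets, then compare the ratios.
example = [
    {"group": "A", "predicted_labels": ["criminal"]},
    {"group": "B", "predicted_labels": []},
]
print(label_rate(example, "A"), label_rate(example, "B"))
```

A rate ratio well above 1 between groups, growing with data-set size, is the kind of signal the researchers report.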

AI can recognise targets (Athena AI)

Drones with AI targeting systems are said to be ‘better than humans’

Earlier this year, we covered the strange story of the AI-powered drone that “killed” its operator to get to its intended target – a story that turned out to be complete nonsense. The US Air Force quickly denied it, but that didn’t stop the tale from being reported around the world.

Now, there are new claims that AI models can identify targets better than humans can – though the details are too closely guarded to be revealed, and therefore to be verified. “It can detect whether people are wearing a specific type of uniform, whether they are carrying weapons, and whether they are surrendering,” says a spokesperson for the software’s developer. Let us hope they are right – and that AI can do a better job of waging war than of identifying prime numbers.
