Artificial Intelligence, Media | Faking pictures has never been easier

Image manipulation is not new, but rapid advances in artificial intelligence (AI) and machine learning are creating new opportunities – and ethical challenges. The term synthetic content or “synthetic media” is used in connection with content generated by artificial intelligence.

– Blurring the line

The line between what is real and what is fake or artificial is becoming blurred, summarized Anders Grimstad, Schibsted’s chief innovation officer, when he presented tech trends at an international conference for media professionals in Los Angeles on Thursday. The conference is organized by the Online News Association (ONA).

The uncertainty Grimstad points to concerns, among other things, how synthetic content (the metaverse, artificial intelligence, and artificially generated images, video and audio) will change our relationship with the media and with what we perceive as “real”.

Read also: Biden’s video spreads quickly: – Scary and sad

The technological examples he highlighted included the GPT-3 language model, which uses so-called “deep learning” to write text as elegantly as a human, and the image tool DALL-E 2, which lets you create and edit images using the words you type.
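To make the “deep learning” point concrete, here is a minimal sketch of machine-generated text. GPT-3 itself is only available through OpenAI’s API, so the example uses the much smaller, openly available GPT-2 model via the Hugging Face transformers library as a stand-in; the prompt and settings are illustrative assumptions, not anything Grimstad demonstrated.

```python
# Minimal sketch: text generation with a small open model (GPT-2) as a
# stand-in for GPT-3, which is only accessible through a paid API.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

prompt = "Synthetic media is changing journalism because"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```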

One of the questions raised by Grimstad’s look into the crystal ball is what we lose when a shared experience of truth, authenticity and facts disappears.

While Norwegians still have high confidence in the media (a record level, according to the 2022 Media Survey), the picture in other countries is one of waning trust. Grimstad himself is, unsurprisingly, optimistic about the future: he firmly believes in journalism’s role in separating real content from false, and in the many opportunities new technology opens up for the media industry.


Human and machine cooperation

Adapting content to different target groups and platforms is one example of what AI-driven tools are believed to be able to contribute to:

– We have to make it clear, to ourselves but also to users, how human and machine will work together in the future, he tells Nettavisen.

– No journalist wants to write six or seven different summaries of the same story. Obviously, this is something AI can help us with. The journalist delivers the thorough, well-crafted main story, while AI tools can suggest summaries adapted to different target groups.
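As an illustration of that division of labour, the sketch below sends one finished article to a language model and asks for summary drafts tuned to a few target groups. The endpoint, model name and audience list are assumptions for illustration (OpenAI’s legacy completions API with a GPT-3-era model), not a description of Schibsted’s actual tooling; a journalist would still review every suggestion.

```python
# Hedged sketch: one main story in, several audience-tailored summary drafts out.
# The endpoint/model are assumptions (OpenAI's GPT-3-era completions API);
# the drafts are suggestions for a human editor, not publish-ready copy.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"
AUDIENCES = ["teenagers", "busy commuters", "readers new to the topic"]

def draft_summaries(article_text: str) -> dict:
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    summaries = {}
    for audience in AUDIENCES:
        prompt = (
            f"Summarize the following news article in three sentences for {audience}:\n\n"
            f"{article_text}\n\nSummary:"
        )
        resp = requests.post(
            API_URL,
            headers=headers,
            json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 150},
            timeout=60,
        )
        resp.raise_for_status()
        summaries[audience] = resp.json()["choices"][0]["text"].strip()
    return summaries
```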

What kind of skills and competencies will the media industry demand as we move into the so-called metaverse, the new Internet?

– An incredible amount of exciting things are happening in 3D and in what we refer to as “spatial experiences”. The tools for creating these kinds of experiences are getting dramatically better, and development happens all the time. This helps lower the barriers to entry and the costs. I think we are going to need journalists who know 3D design and gaming, among other things. They will be able to develop new user experiences, but also contribute content that can be displayed on traditional surfaces.

Read also: An environmental organization has edited photos to warn against the use of plastic

Write what you want to see

The DALL-E 2 image tool is now in beta and is highlighted by many as an example of the possibilities AI opens up. It creates more or less realistic image files from descriptions you type in plain language.


Need a picture of a polar bear playing bass guitar? Type “polar bear playing bass guitar”, and the program creates the image.
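For readers curious about what “type a description, get a picture” looks like in practice, here is a rough sketch of a request to OpenAI’s image-generation API, the service behind DALL-E 2. The endpoint and parameters reflect the public documentation at the time of writing and should be treated as assumptions rather than a definitive recipe.

```python
# Rough sketch: turning a plain-language prompt into an image via OpenAI's
# Images API (the service behind DALL-E 2). Endpoint and fields are assumptions
# based on the public documentation at the time of writing.
import os
import requests

def generate_image(prompt: str, size: str = "1024x1024") -> str:
    """Send a text prompt and return a URL to the generated image."""
    response = requests.post(
        "https://api.openai.com/v1/images/generations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"prompt": prompt, "n": 1, "size": size},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["data"][0]["url"]

print(generate_image("polar bear playing bass guitar"))
```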

Grimstad showed DALL-E 2 applied to famous artworks, such as Johannes Vermeer’s Girl with a Pearl Earring, where the motif is extended and altered with colors, technique and texture matching the original.

What is real?

The award-winning former photojournalist Santiago Lyon demonstrated, in another session at the conference, how he used DALL-E 2 to create what at first glance looks like a real press photo. With the prompt “Golden Gate Bridge on fire”, the program produced a fake news image of the famous bridge ablaze.

Lyon works for the Content Authenticity Initiative (CAI) and was at the conference to talk about the challenges of synthetic content and the question of authenticity. The CAI is backed by Adobe, The New York Times and Twitter, and its goal is to develop an industry standard for identifying the provenance of digital files and combating misinformation that threatens public trust in the media.

Read also: Shaken by the Russian video: – Very amateurish

The tool, which is being developed as open source and will gradually become available in image-editing software and in camera manufacturers’ own programs, provides information about the image file and its editing history. While the standard metadata in an image file is relatively easy to manipulate, the CAI “stamp” is meant to be much harder to tamper with. Lyon and his colleagues are also keen to improve ordinary people’s “literacy” when it comes to images, including the ability to see through illusions.
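The CAI/C2PA specification itself relies on certificate-based digital signatures, but the toy sketch below shows the underlying idea of a tamper-evident “stamp”: the image bytes and the edit history are hashed and signed together, so any later change to either breaks verification. The key, fields and HMAC scheme here are simplifications for illustration, not the real standard.

```python
# Toy illustration (NOT the real CAI/C2PA format) of a tamper-evident stamp:
# hash the image bytes plus the edit history, sign the result, and any later
# change to pixels or history makes verification fail. Real systems use
# public-key signatures and certificate chains instead of a shared key.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # stand-in for a camera/editor signing key

def stamp(image_bytes: bytes, edit_history: list) -> dict:
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(edit_history)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"edit_history": edit_history, "signature": signature}

def verify(image_bytes: bytes, claim: dict) -> bool:
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(claim["edit_history"])
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

original = b"...raw image bytes..."
claim = stamp(original, ["captured 2022-09-22", "cropped", "exposure +0.3"])
print(verify(original, claim))                # True: untouched file
print(verify(original + b"tampered", claim))  # False: pixels changed after stamping
```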


Market growth

At the same time, the gaming industry and gamers, who are major consumers of computer-generated imagery, can rub their hands: synthetic content opens up revenue opportunities in a large market.

In 2020, according to Anders Grimstad, so-called “skins” and “loot boxes” (outfits for avatars and surprise packs) were sold in various virtual worlds (Fortnite, for example) for USD 15 billion (NOK 157 billion). This figure is expected to rise to USD 20 billion in 2025.
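For context, here is a quick back-of-the-envelope calculation of what those two figures imply about average annual growth, assuming steady compound growth over the five years (which the article does not state).

```python
# Implied compound annual growth rate between the cited figures:
# USD 15 billion (2020) -> USD 20 billion (projected 2025).
start, end, years = 15e9, 20e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied average annual growth: {cagr:.1%}")  # about 5.9% per year
```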
