Everyday AI: A closer look at artificial intelligence trends taking over social media, mobile apps


When David Holz, founder and CEO of AI art generator Midjourney, thinks about the future of technology in his industry, he likens it to water.

Water can be dangerous, and people can drown in it, but that danger is unintentional. The challenge people face with water is learning to swim, building boats and dams, and finding ways to harness its power.

“You can make two images, and it’s cool, but you make 100,000 images, and you have an actual physical feeling of drowning,” Holz said in an interview with Fortune. “So we’re trying to figure out how do you teach people to swim? And how do you build these boats that allow them to navigate and are empowered and kind of sail the ocean of imagination, instead of drowning?”

AI image generators have proliferated in Silicon Valley and gone viral on social media. Just a few weeks ago, it was almost impossible to scroll Instagram without seeing the “magic avatars” from Lensa AI, colorful digital selfies created with the AI-powered editing app.

“In the last 12 months, the development of these technologies has been enormous,” said Mhairi Aitken, an ethics fellow at The Alan Turing Institute, the UK’s national institute for data science and AI. Users, she said, are turning to AI image generators to create a particular output without necessarily understanding the process that creates it, or the technology behind it.

Lensa AI is an all-in-one image editing app for selfies and photo retouching.

Courtesy of Lensa AI

The models behind these AI image generators have made their way onto smartphones as new breakthroughs deepen the models’ ability to understand language and produce more realistic images. “You teach the system to become familiar with many elements of the world,” Holz explained.

As a result, almost any user can design, edit, and rework their own likeness in images uploaded to apps like Lensa AI, which launched last year and has over a million subscribers. Looking ahead, Lensa says it wants to develop the app into a one-stop shop that meets users’ needs around visual content creation and photography.

Art made by AI first emerged in the 1960s, but many of the models in use today are in their infancy. Midjourney, DALL-E 2, and Imagen—some of the most prominent players in the space—all debuted in 2022. Some of the world’s biggest tech giants are paying close attention. Imagen is Google’s text-to-image AI model, currently in beta, while Microsoft is reportedly weighing a $10 billion investment in OpenAI, whose models include the chatbot ChatGPT and DALL-E 2.

“These are some of the largest, most complex AI models that have been deployed in a consumer way,” Holz said. “This is the first time that a regular person interacts with these large and complex new AI models, which will define the next decade.”

An example of a photo retouched by Lensa AI.

Courtesy of Lensa AI

But the new technology also raises ethical questions about potential online harassment, deepfakes, consent, the hypersexualization of women, and the copyright and job security of visual artists.

Holz acknowledges that AI image generators, as is the case with most new technological developments, carry significant male biases. The people behind these models still have work to do to understand the rules behind AI image generation, and more women should have a decisive role in how the technology develops.

At Midjourney, there was a debate about whether the lab should allow users to generate sexualized images. Take the example of a woman wearing a bikini at the beach: should that be allowed? Midjourney brought together a group of women who ultimately decided that yes, the community can create bikini images, but those images will remain private to the user and not be shared with the rest of the system.

“I don’t want to hear a guy’s opinion on this,” Holz said. Certain phrases are blocked in Midjourney to prevent malicious images from proliferating within the system.

“Midjourney strives to be a safe space for all ages, and all genders,” Holz said. “We’re more in the Disney space.”

On the one hand, a dark argument could be made that AI image generators—which, again, have no intentions of their own—are simply reflecting our society back to us. But Aitken says that’s not good enough. “It’s not just a matter of taking the data that’s available and saying, ‘That’s it,’” Aitken said. “We make choices about the data and whose experiences are represented.”

Aitken added that “we need to think more about representation within the tech industry, and whether we can ensure more diversity within these processes, because it’s often the case that when biases emerge in datasets, it is because they have not been anticipated in the design or development process.”

An example of a photo retouched by Lensa AI.

Courtesy of Lensa AI

Concerns about how these models can be used for harassment, promote prejudice, or create harmful images have led to calls for greater guardrails. Google’s own research shows mixed views about the social impact of text-to-image generation; the concerns were great enough that the tech giant chose not to release Imagen’s code or a public demo. Governments may also take regulatory action. The Cyberspace Administration of China has a new law, which took effect in January, requiring AI-generated images to be watermarked and requiring consent from individuals before deepfakes of them are made.

Visual artists have also expressed concern about how the new technology could infringe on their rights, or even take away work they would previously have been paid for. The San Francisco Ballet recently experimented with Midjourney’s tech to create a digital, AI-generated image for its production of The Nutcracker. Users flooded the company’s Instagram post with complaints.

In January, a group of AI image generator companies—including Midjourney—was named in a lawsuit claiming that the datasets behind their products were trained on “billions of copyrighted images” downloaded and used without payment to, or permission from, the artists. The lawsuit alleges violations of California’s unfair competition laws and seeks to protect artists’ intellectual property, much as happened when music streaming technology emerged. The case was filed after Fortune’s interview with Midjourney, and the publication has reached out to Midjourney for further comment.

Holz said most of the people who use Midjourney are not artists, and very few people sell images created from the model.

“It’s almost like the word AI is poisonous, because we inevitably think it’s here to replace us and kill us,” Holz said. “An important thing is to think about how we make people better, rather than how we replace people.”
