
What to expect from AI in 2023

As a rather commercially successful author once wrote, “the night is dark and full of terrors, the day bright and beautiful and full of hope.” It’s fitting imagery for AI, which like all tech has its upsides and downsides.

Art-generating models like Stable Diffusion, for instance, have led to incredible outpourings of creativity, powering apps and even entirely new business models. On the other hand, their open source nature lets bad actors use them to create deepfakes at scale, all while artists protest that the models profit off of their work.

What’s on deck for AI in 2023? Will regulation rein in the worst of what AI brings, or are the floodgates open? Will powerful, transformative new forms of AI emerge, à la ChatGPT, disrupting industries once thought safe from automation?

Expect more (problematic) art-generating AI apps

With the success of Lensa, the AI-powered selfie app from Prisma Labs that went viral, you can expect a lot of me-too apps along these lines. And expect them, too, to be tricked into creating NSFW images and to disproportionately sexualize and alter the appearance of women.

Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, said he expects the integration of generative AI into consumer tech to amplify the effects of such systems, both the good and the bad.

Stable Diffusion, for example, was fed billions of images from the internet until it “learned” to associate certain words and concepts with certain imagery. Text-generating models, for their part, have routinely been tricked into espousing offensive views or producing misleading content.
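To make that word-to-imagery association concrete, here is a minimal sketch of prompting Stable Diffusion through the open source Hugging Face diffusers library; the checkpoint name and prompt are illustrative, and a CUDA-capable GPU is assumed:

```python
# Minimal sketch: generating an image from a text prompt with Stable
# Diffusion via Hugging Face diffusers. Checkpoint and prompt are
# illustrative choices, not the only options.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one public checkpoint among several
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# The text encoder maps the prompt into the embedding space the model
# learned from billions of captioned images; the diffusion process then
# denoises random latents toward imagery associated with those words.
image = pipe("a lighthouse at dusk, oil painting").images[0]
image.save("lighthouse.png")
```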

Mike Cook, a member of the Knives and Paintbrushes open research group, agrees with Gahntz that generative AI will continue to prove a major — and problematic — force for change. But he thinks that 2023 has to be the year that generative AI “finally puts its money where its mouth is.”

“It’s not enough to motivate a community of specialists [to create new tech] — for technology to become a long-term part of our lives, it has to either make someone a lot of money, or have a meaningful impact on the daily lives of the general public,” Cook said. “So I predict we’ll see a serious push to make generative AI actually achieve one of these two things, with mixed success.”

Artists lead the effort to opt out of data sets

DeviantArt released an AI art generator built on Stable Diffusion and fine-tuned on artwork from the DeviantArt community. The art generator was met with loud disapproval from DeviantArt’s longtime denizens, who criticized the platform’s lack of transparency in using their uploaded art to train the system.

The creators of the most popular systems — OpenAI and Stability AI — say that they’ve taken steps to limit the amount of harmful content their systems produce. But judging by many of the generations on social media, it’s clear that there’s work to be done.

“The data sets require active curation to address these problems and should be subjected to significant scrutiny, including from communities that tend to get the short end of the stick,” Gahntz said, comparing the process to ongoing controversies over content moderation in social media.

Stability AI, which is largely funding the development of Stable Diffusion, recently bowed to public pressure, signaling that it would allow artists to opt out of the dataset used to train the next-generation Stable Diffusion model. Through the website HaveIBeenTrained.com, rightsholders will be able to request opt-outs before training begins in a few weeks’ time.
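Neither Stability AI nor HaveIBeenTrained.com has detailed exactly how those opt-outs will be enforced at training time, but conceptually the job reduces to filtering the training set against a registry of opted-out works. The sketch below is hypothetical; the file format, the URL-based matching, and every name in it are invented for illustration:

```python
# Hypothetical sketch: honoring an opt-out registry when assembling a
# training set. The registry format and URL matching are assumptions
# for illustration, not Stability AI's actual pipeline.

def load_opt_out_registry(path: str) -> set[str]:
    """Read one opted-out image URL per line into a set."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def filter_training_set(samples: list[dict], registry: set[str]) -> list[dict]:
    """Drop any (url, caption) sample whose URL appears in the registry."""
    return [s for s in samples if s["url"] not in registry]

registry = load_opt_out_registry("opt_outs.txt")
samples = [
    {"url": "https://example.com/art/123.png", "caption": "a watercolor fox"},
    {"url": "https://example.com/art/456.png", "caption": "city skyline at night"},
]
kept = filter_training_set(samples, registry)
print(f"kept {len(kept)} of {len(samples)} samples")
```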

OpenAI offers no such opt-out mechanism, instead preferring to partner with organizations like Shutterstock to license portions of their image galleries. But given the legal and publicity headwinds it faces alongside Stability AI, it’s likely only a matter of time before it follows suit.

The courts may ultimately force its hand. In the U.S., Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by letting Copilot, GitHub’s service that intelligently suggests lines of code, regurgitate sections of licensed code without providing credit.

Perhaps anticipating the legal challenge, GitHub recently added settings to prevent public code from showing up in Copilot’s suggestions and plans to introduce a feature that will reference the source of code suggestions. But they’re imperfect measures. In at least one instance, the filter setting caused Copilot to emit large chunks of copyrighted code, including all attribution and license text.
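GitHub has not published how that filter works. Its basic job, suppressing suggestions that reproduce long verbatim runs of indexed public code, can nevertheless be sketched. The 64-character window and every name below are hypothetical illustrations, not GitHub's actual matching rule:

```python
# Hypothetical sketch: flag a code suggestion that reproduces a long
# verbatim run from an indexed public corpus. The 64-character window
# is an arbitrary illustrative threshold.

WINDOW = 64  # contiguous characters that count as a verbatim match

def build_index(corpus_files: list[str]) -> set[str]:
    """Index every WINDOW-length substring of the public corpus."""
    index: set[str] = set()
    for path in corpus_files:
        with open(path) as f:
            text = f.read()
        for i in range(len(text) - WINDOW + 1):
            index.add(text[i : i + WINDOW])
    return index

def matches_public_code(suggestion: str, index: set[str]) -> bool:
    """True if any WINDOW-length run of the suggestion is in the index."""
    return any(
        suggestion[i : i + WINDOW] in index
        for i in range(len(suggestion) - WINDOW + 1)
    )
```

As the reported filter failure suggests, exact-match heuristics like this are brittle: trivially reformatted or lightly edited code slips past them, while the underlying licensed work is still reproduced.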

Expect to see criticism ramp up in the coming year, particularly as the U.K. mulls over rules that would remove the requirement that systems trained through public data be used strictly non-commercially.
