It hasn't even been a year since OpenAI released ChatGPT, and already generative AI is everywhere. It's in classrooms; it's in political advertisements; it's in entertainment and journalism and a growing number of AI-powered content farms. Hell, generative AI has even been integrated into search engines, the great mediators and organizers of the open web. People have already lost work to the tech, while new and often confounding AI-related careers seem to be on the rise. Whether it sticks in the long term remains to be seen, but for the time being generative AI seems to be cementing its place in our digital and real lives. And as it becomes increasingly ubiquitous, so does the synthetic content it produces.

But in an ironic twist, those same synthetic outputs might also be generative AI's biggest threat. That's because the growing generative AI economy is underpinned by human-made data. Generative AI models don't just cough up human-like content out of thin air; they've been trained to do so using troves of material that actually was made by humans, usually scraped from the web. But as it turns out, when you feed synthetic content back into a generative AI model, strange things start to happen. Think of it as data inbreeding, leading to increasingly mangled, bland, and all-around bad outputs. (Back in February, Monash University data researcher Jathan Sadowski described it as "Habsburg AI," or "a system that is so heavily trained on the outputs of other generative AI's that it becomes an inbred mutant, likely with exaggerated, grotesque features.")

It's a problem that looms large. AI builders are continuously hungry to feed their models more data, which is generally scraped from an internet increasingly laden with synthetic content. If there's too much destructive inbreeding, could everything just... fall apart?

To understand the phenomenon better, we spoke to machine learning researchers Sina Alemohammad and Josue Casco-Rodriguez, Ph.D. students in Rice University's Electrical and Computer Engineering department, and their supervising professor, Richard G. Baraniuk. In collaboration with researchers at Stanford, they recently published a fascinating (though not yet peer-reviewed) paper on the subject, titled "Self-Consuming Generative Models Go MAD." MAD, which stands for Model Autophagy Disorder, is the term they've coined for AI's apparent self-allergy. In their research, it took only five cycles of training on synthetic data for an AI model's outputs to, in the words of Baraniuk, "blow up." It's a fascinating glimpse at what just might be generative AI's Achilles' heel. If so, what does it all mean for regular people, the burgeoning AI industry, and the internet itself? In-depth details can be found on OUR FORUM.
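To get a feel for the mechanism, consider a deliberately tiny analogue. The sketch below is not the MAD paper's experiment (the team studied deep generative models such as image generators); it just fits a one-dimensional Gaussian to data, then trains each new "generation" only on samples drawn from the previous one, a minimal stand-in for a fully synthetic training loop. The Gaussian setup, sample sizes, and variable names are our own illustration, not the paper's.

```python
import numpy as np

# Toy stand-in for a self-consuming training loop: fit a simple
# "model" (a 1-D Gaussian) to data, then replace the training set
# with samples drawn from the fitted model, and repeat. This is NOT
# the MAD paper's setup, only a minimal analogue of training on
# purely synthetic data with no fresh real data mixed in.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=200)  # generation 0: "real" data

for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()      # "train": estimate the model
    data = rng.normal(mu, sigma, size=200)   # next generation sees only synthetic samples
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Across generations the estimated std tends to drift and shrink: the
# synthetic distribution narrows and wanders, a crude analogue of the
# loss of quality and diversity in a self-consuming loop.
```

In the researchers' actual experiments it took only about five such cycles for outputs to degrade visibly, and the paper reports that mixing fresh real data back in each generation can help stave off the decay.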
As many of us grow accustomed to using artificial intelligence tools daily, it's worth remembering to keep our questioning hats on. Nothing is completely safe and free from security vulnerabilities. Still, the companies behind many of the most popular generative AI tools are constantly updating their safety measures to prevent the generation and proliferation of inaccurate and harmful content. Researchers at Carnegie Mellon University and the Center for AI Safety teamed up to find vulnerabilities in AI chatbots like ChatGPT, Google Bard, and Claude -- and they succeeded.

In a research paper examining the vulnerability of large language models (LLMs) to automated adversarial attacks, the authors demonstrated that even a model said to be resistant to attacks can be tricked into bypassing its content filters and providing harmful information, misinformation, and hate speech. This leaves these models vulnerable, potentially leading to the misuse of AI. "This shows -- very clearly -- the brittleness of the defenses we are building into these systems," Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard, told The New York Times. For the experiment, the authors used an open-source AI system to target the black-box LLMs from OpenAI, Google, and Anthropic. These companies have created foundational models on which they've built their respective AI chatbots: ChatGPT, Bard, and Claude.

Since the launch of ChatGPT last fall, some users have looked for ways to get the chatbot to generate malicious content. This led OpenAI, the company behind GPT-3.5 and GPT-4, the LLMs used in ChatGPT, to put stronger guardrails in place. That is why you can't go to ChatGPT and ask it questions that involve illegal activities, hate speech, or topics that promote violence, among others. The success of ChatGPT pushed more tech companies to jump on the generative AI bandwagon and create their own AI tools, like Microsoft with Bing, Google with Bard, Anthropic with Claude, and many more. The fear that bad actors could leverage these AI chatbots to spread misinformation, combined with the lack of universal AI regulations, led each company to create its own guardrails.

A group of researchers at Carnegie Mellon decided to test the strength of these safety measures. But you can't just ask ChatGPT to forget all its guardrails and expect it to comply; a more sophisticated approach was necessary. The researchers tricked the AI chatbots into not recognizing harmful inputs by appending a long string of characters to the end of each prompt. These characters acted as a disguise for the prompt: the chatbot still processed the underlying request, but the extra characters kept the guardrails and content filter from recognizing it as something to block or modify, so the system generated a response it normally wouldn't. "Through simulated conversation, you can use these chatbots to convince people to believe disinformation," Matt Fredrikson, a professor at Carnegie Mellon and one of the paper's authors, told the Times.

As the AI chatbots misinterpreted the nature of the input and produced disallowed output, one thing became evident: there's a need for stronger AI safety methods, and possibly a reassessment of how guardrails and content filters are built. Continued research into these kinds of vulnerabilities could also accelerate the development of government regulation for these AI systems. "There is no obvious solution," Zico Kolter, a professor at Carnegie Mellon and an author of the report, told the Times. "You can create as many of these attacks as you want in a short amount of time." Follow this and more by visiting OUR FORUM.
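To make the shape of the attack concrete, here is a toy, self-contained sketch. Heavy caveats apply: the actual paper (Zou et al., "Universal and Transferable Adversarial Attacks on Aligned Language Models") finds its suffixes via gradient-guided search over tokens on open-source models and then transfers them to black-box chatbots. The keyword filter, the random search, and the prompt below are all invented stand-ins, meant only to show how appended characters can slip an unchanged request past a brittle guardrail.

```python
import random
import string

def toy_guardrail_blocks(prompt: str) -> bool:
    """Brittle made-up filter: block a prompt if flagged words dominate it."""
    words = prompt.lower().split()
    flagged = sum(w in {"weapon", "bomb", "explosives"} for w in words)
    return flagged / max(len(words), 1) > 0.2

def find_suffix(prompt: str, tries: int = 100) -> str | None:
    """Random-search stand-in for the paper's optimized suffix search."""
    for _ in range(tries):
        # Ten gibberish "words" appended after the unchanged prompt.
        suffix = " ".join(
            "".join(random.choices(string.ascii_letters, k=6)) for _ in range(10)
        )
        if not toy_guardrail_blocks(prompt + " " + suffix):
            return suffix  # the padded prompt now slips past the filter
    return None

prompt = "describe a weapon"
print(toy_guardrail_blocks(prompt))      # True: blocked on its own
print(find_suffix(prompt) is not None)   # True: a suffix sneaks it through
```

The real suffixes are anything but random: they are optimized strings that cause the model itself to misread the request, which is why, as Kolter notes, new variants can be produced in a short amount of time.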
Many have raised alarms about the potential for artificial intelligence to displace jobs in the years ahead, but it's already causing upheaval in one industry where workers once seemed invincible: tech. A small but growing number of tech firms have cited AI as a reason for laying off workers and rethinking new hires in recent months, as Silicon Valley races to adapt to rapid advances in the technology being developed in its own backyard.

Chegg, an education technology company, disclosed in a regulatory filing last month that it was cutting 4% of its workforce, or about 80 employees, “to better position the Company to execute against its AI strategy and to create long-term, sustainable value for its students and investors.” IBM CEO Arvind Krishna said in an interview with Bloomberg in May that the company expects to pause hiring for roles it thinks could be replaced with AI in the coming years. (In a subsequent interview with Barron's, however, Krishna said he felt his earlier comments were taken out of context and stressed that “AI is going to create more jobs than it takes away.”) And in late April, file-storage service Dropbox said it was cutting about 16% of its workforce, or about 500 people, also citing AI. In its most recent layoffs report, outplacement firm Challenger, Gray & Christmas said 3,900 people were laid off in May due to AI, marking the first time it has broken out job cuts based on that factor. All of those cuts occurred in the tech sector, according to the firm.

With these moves, Silicon Valley may not only be leading the charge in developing AI but also offering an early glimpse into how businesses may adapt to those tools. Rather than render entire skill sets obsolete overnight, as some might fear, the more immediate impact of the new crop of AI tools appears to be forcing companies to shift resources to take better advantage of the technology, placing a premium on workers with AI expertise. “Over the last few months, AI has captured the world’s collective imagination, expanding the potential market for our next generation of AI-powered products more rapidly than any of us could have anticipated,” Dropbox CEO Drew Houston wrote in a note to staff announcing the job cuts. “Our next stage of growth requires a different mix of skill sets, particularly in AI and early-stage product development.” In response to a request for comment on how its realignment around AI is playing out, Dropbox directed CNN to its careers page, where it is currently hiring for multiple roles focused on “New AI Initiatives.”

Dan Wang, a professor at Columbia Business School, told CNN that AI “will cause organizations to restructure,” but he doesn’t see it playing out as machines replacing humans just yet. “AI, as far as I see it, doesn’t necessarily replace humans, but rather enhances the work of humans,” Wang said. “I think that the kind of competition that we all should be thinking more about is that human specialists will be replaced by human specialists who can take advantage of AI tools.” Complete details can be found on OUR FORUM.