
Google Researchers Successfully Bypass the AI-Guardian Moderation System Using GPT-4

Source: 51CTO.COM

Published: 2023-08-19 14:27:34



According to foreign media reports, Google researchers have demonstrated how OpenAI's GPT-4 can be used to circumvent the protections of AI-Guardian.

AI-Guardian is an AI moderation system that detects whether an image contains inappropriate content and whether the image has been modified by another AI. When it finds inappropriate content or signs of tampering, it alerts administrators to take action.

In the paper, Google DeepMind research scientist Nicholas Carlini shows how GPT-4 was prompted to design an attack that circumvents AI-Guardian's protections. The experiment demonstrates the potential value of chatbots as aids to security research and underscores the influence that powerful language models such as GPT-4 may have on the future of cybersecurity.

Carlini's research examines how attack strategies against AI-Guardian can be developed with the help of OpenAI's large language model GPT-4. AI-Guardian was originally designed to thwart adversarial attacks by identifying and blocking inputs that contain suspicious artifacts. Carlini's paper, however, demonstrates that with guided prompting, GPT-4 can defeat this defense by producing attack scripts and subtly altered images that deceive the classifier without triggering AI-Guardian's detection mechanisms.
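The article does not reproduce the scripts GPT-4 actually proposed, so the following Python sketch is only a generic illustration of the kind of evasion described above: an iterative, gradient-guided perturbation that flips a classifier's prediction while keeping the visual change small. The `pgd_evasion` helper and the toy classifier are hypothetical stand-ins, not Carlini's attack code or AI-Guardian's real models.

```python
# Hypothetical sketch of an untargeted, PGD-style evasion attack.
# The classifier below is a stand-in, not a model from the paper.
import torch
import torch.nn.functional as F

def pgd_evasion(model, image, true_label, epsilon=8 / 255, step=2 / 255, iters=40):
    """Perturb `image` ([1, C, H, W], values in [0, 1]) so `model` no longer
    predicts `true_label`, keeping the change inside an L-infinity ball of
    radius `epsilon` so the edit stays visually subtle."""
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), true_label)   # push away from the true class
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + step * grad.sign()                          # gradient-ascent step
            adv = image + (adv - image).clamp(-epsilon, epsilon)    # project back into the epsilon ball
            adv = adv.clamp(0.0, 1.0)                               # keep a valid image
        adv = adv.detach()
    return adv

if __name__ == "__main__":
    # Toy stand-in classifier on 32x32 RGB images, purely for demonstration.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    image = torch.rand(1, 3, 32, 32)
    label = torch.tensor([3])
    adv = pgd_evasion(model, image, label)
    print("original prediction:", model(image).argmax(1).item())
    print("adversarial prediction:", model(adv).argmax(1).item())
```

The design choice here is standard for such attacks: maximize the classifier's loss on the true label while constraining the perturbation size, so the altered image still looks ordinary to a human observer.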

Carlini's paper includes the Python code suggested by GPT-4 for exploiting AI-Guardian's weaknesses. Under the threat model of the original AI-Guardian study, the attack reduced AI-Guardian's robustness from 98% to just 8%. The authors of AI-Guardian acknowledge that Carlini's attack successfully bypasses their defense.
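As a rough illustration of what that robustness figure means, the sketch below counts how often a defended model still handles adversarially perturbed inputs correctly, either by classifying them right or by rejecting them. The `robustness` function and the toy components are hypothetical; the article only reports the resulting numbers, roughly 98% before the GPT-4-assisted attack and 8% after.

```python
# Hypothetical sketch of how a defense's robustness figure might be measured:
# robustness = share of attacked inputs the defended model still handles correctly
# (classified as the true label, or explicitly rejected by the defense).
def robustness(defended_model, attack, dataset):
    survived = 0
    for image, label in dataset:
        adv = attack(defended_model, image, label)     # craft an adversarial input
        verdict = defended_model(adv)                   # returns a label or "rejected"
        if verdict == "rejected" or verdict == label:   # the defense held up
            survived += 1
    return survived / len(dataset)

if __name__ == "__main__":
    # Toy demo with stand-in components: an "attack" that leaves the input
    # unchanged and a model that always misclassifies, so robustness is 0.0.
    toy_data = [("img%d" % i, i % 10) for i in range(25)]
    toy_attack = lambda model, image, label: image
    toy_model = lambda image: "wrong"
    print("toy robustness:", robustness(toy_model, toy_attack, toy_data))
    # The paper's reported values under the original threat model are roughly
    # 0.98 before the GPT-4-assisted attack and 0.08 after it.
```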

Nicholas Carlini's use of GPT-4 to defeat AI-Guardian marks a notable milestone in AI-versus-AI research. It shows how language models can serve as research assistants that help identify vulnerabilities and strengthen cybersecurity measures. While GPT-4's capabilities offer promising prospects for future security research, the work also underscores the importance of human expertise and collaboration. As language models continue to develop, they have the potential to reshape the field of cybersecurity and inspire new approaches to defending against adversarial attacks.

