
How Terrifying Are AI Scams? Seeing Isn't Always Believing, and Anyone Could Fall for Them

Source: Sohu

Published: 2023-05-30 11:09:23


We have all heard of traditional scams: someone poses as a friend or relative and contacts you through a chat app, or hijacks a friend's account to message you. The contact usually happens over text, and the ultimate goal is always to get you to send money. Normally, confirming over a video call and staying a little alert is enough to avoid being fooled. But once new AI technology is applied to fraud, things change.


AI Face-Swapping Scams

The newly emerged AI face-swapping scam has succeeded again and again. Scammers generate a real-time AI video from photographs and replicate a person's voice with voice-cloning technology, then place a video call to the victim. What the victim sees is a familiar friend or relative who claims to be in an urgent situation and needs money. The face is the friend's face, the voice is the friend's voice, and the call itself seems to confirm its authenticity, so the victim doesn't think twice and dutifully transfers the money.

AI-powered fraud means that seeing is no longer believing: the common anti-scam habit of verifying identity over video no longer works. That is what makes this form of fraud so frightening.


AI Technology Could Be Used for More Serious Crimes

It is foreseeable that AI technology could also be used in more serious criminal activity. In a traditional kidnapping case, for example, the victim generally will not contact their family on command, so the criminals expose themselves the moment they demand a ransom. With AI, however, criminals could collect a victim's personal information, build an AI clone that looks and sounds exactly like them, and then exploit the victim's contact list to make the clone say whatever they want, committing the crime without anyone noticing.

Similarly, in past pyramid-scheme cases, victims whose freedom was restricted asked friends and relatives for money only under coercion. In the future, criminals could use AI, combined with pre-scripted dialogue, to let a cloned victim defraud everyone in the victim's contact list at scale. And once the victim's phone is under the criminals' control, even a confirmation call might be answered by an AI-generated virtual person.


How to Prevent AI Fraud

I believe preventing AI fraud requires action on several fronts:

First, communication tools need a technical upgrade: they should add real-person (liveness) verification. This capability already exists and is used on short-video live-streaming platforms; I believe it could be brought into communication tools as well, so that as soon as the system detects that a message may have been AI-generated, it warns the user.
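To make the idea concrete, here is a minimal sketch in Go of how a messaging pipeline might surface such a warning. The aiLikelihood detector is entirely hypothetical, standing in for a real AI-content classifier, and the 0.8 threshold is an arbitrary assumption.

```go
package main

import "fmt"

// Message is a simplified incoming chat message.
type Message struct {
	Sender string
	Text   string
}

// aiLikelihood stands in for a real AI-content detector (for example,
// a classifier served behind an internal API). The fixed score below
// is purely illustrative.
func aiLikelihood(m Message) float64 {
	// A real detector would analyze audio, video frames, or writing style.
	return 0.92
}

// screenMessage prepends a warning when the detector's score crosses
// the chosen threshold.
func screenMessage(m Message, threshold float64) string {
	if aiLikelihood(m) >= threshold {
		return "[Warning: this content may be AI-generated] " + m.Text
	}
	return m.Text
}

func main() {
	msg := Message{Sender: "friend", Text: "I'm in trouble, please wire money now."}
	fmt.Println(screenMessage(msg, 0.8))
}
```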

Second, friends, families, and companies would do well to agree on a communication passphrase: for example, the challenge "open sesame" answered by "Wu Song fights the tiger." Such arbitrary pairs are known only within the family or the company, so if an AI scam occurs, the odds of criminals cracking the passphrase are tiny, adding a layer of security to the conversation.
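The passphrase idea is just a pre-shared challenge-response check. A minimal Go sketch, using the article's example pair; any real pairs should be agreed face to face and never sent over the channel being verified:

```go
package main

import "fmt"

// passphrases maps each pre-agreed challenge to its expected response.
// These pairs are illustrative; real ones must stay out-of-band.
var passphrases = map[string]string{
	"open sesame": "Wu Song fights the tiger",
}

// verify checks a caller's answer to a pre-shared challenge.
func verify(challenge, answer string) bool {
	expected, ok := passphrases[challenge]
	return ok && answer == expected
}

func main() {
	fmt.Println(verify("open sesame", "Wu Song fights the tiger")) // true
	fmt.Println(verify("open sesame", "no idea"))                  // false: treat the caller as suspect
}
```

The strength of the scheme comes from the pairs being random and shared only in person; an AI clone built from public data has nothing to learn them from.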

Finally, where possible, large transfers should still be confirmed in person. When face-to-face confirmation is impossible, confirm through multiple channels: a phone call plus a video call. The more safeguards you layer on, the harder you are to defraud.
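This multi-channel rule can be expressed as a simple policy check. In the sketch below, the 10,000 cutoff and the two-confirmation requirement are illustrative policy choices of mine, not a standard.

```go
package main

import "fmt"

// Confirmation records one verification performed over an independent channel.
type Confirmation struct {
	Channel string // e.g. "in-person", "phone", "video"
	Passed  bool
}

// approveTransfer lets a large transfer proceed only after at least two
// confirmations have passed; smaller amounts are waved through.
func approveTransfer(amount float64, confirmations []Confirmation) bool {
	const largeAmount = 10000.0
	if amount < largeAmount {
		return true
	}
	passed := 0
	for _, c := range confirmations {
		if c.Passed {
			passed++
		}
	}
	return passed >= 2
}

func main() {
	checks := []Confirmation{
		{Channel: "phone", Passed: true},
		{Channel: "video", Passed: true},
	}
	fmt.Println(approveTransfer(50000, checks)) // true: two independent channels confirmed
}
```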

