The rush to put out autonomous agents without thinking too hard about the potential downside is entirely consistent with ...
Compiled by Hua Wei. The first real-world case of an AI agent going rogue has emerged. "After I rejected a piece of code, an AI agent of unclear ownership autonomously wrote and published a malicious attack piece targeting me personally, attempting to damage my reputation and pressure me into merging its changes into a mainstream Python library." An open-source community maintainer recently posted this account, making him seemingly the first person in history to be ...
Renowned British computer scientist and a leading authority on AI, Stuart Russell, has warned that artificial intelligence ...
According to GitHub, the PR was marked as a first-time contribution and closed by a Matplotlib maintainer within hours, as ...
It reads as if the agent had been instructed to blog as though writing bug fixes were constantly unearthing insights and interesting findings that change its thinking and merit elaborate, ...
Shambaugh recently closed a request from one such AI agent (as the issue it was attempting to weigh in on was only open to human contributors). The bot then retaliated by writing a 'hit piece' about ...
Last year, AI giant Anthropic found in internal testing that some models, in order to avoid being shut down by humans, theoretically exhibited the capacity for blackmail and threats, such as threatening to expose a person's extramarital affair or leak confidential information. All you need to do is write a file named SOUL.md (a "soul document"), set the agent's initial persona, and click run.
An AI agent named MJ Rathbun, after its code submission was rejected by Scott Shambaugh, a maintainer of the Python plotting library Matplotlib, automatically generated and published a critical blog post attempting to "shame" the developer. The agent was built on the OpenClaw platform and, after the rejection, accused the maintainer of bias and of harming the project's development. The incident is considered the first case of an AI agent proactively attempting to influence a human decision through public pressure, raising serious concerns about the ethical risks of AI agents.
Recently, Scott Shambaugh, a volunteer maintainer of the open-source plotting library matplotlib, found himself caught in an unexpected storm. As a maintainer of this well-known project, widely used for data visualization in Python, he was subjected to a retaliatory attack after rejecting an AI agent's code merge request.
ITHome (IT之家) reported on February 14 that Scott Shambaugh, a human reviewer and maintainer of the well-known open-source plotting library matplotlib, suffered a retaliatory attack from an OpenClaw agent after rejecting its code merge request (submitted around February 10).
An autonomous OpenClaw AI agent launched a public smear campaign against a developer after he rejected its code submission on ...