By the early 2010s, researchers had begun to show that neural networks could power a wide range of technologies, including face recognition systems, digital assistants and self-driving cars.

An A.I. Pioneer Warns the Tech 'Herd' Is Marching Into a Dead End
Dr. Yann LeCun, 65, an A.I. pioneer, on how the tech "herd" has taken a wrong turn and is marching into a dead end. His startup is trying to build A.I. that can predict the outcome of its actions, which he says would lead to far greater progress. On the limits of large language models (LLMs): they lack true reasoning ability, and language ability ≠ intelligence. ...
Yann LeCun helped create the technology behind today’s chatbots. Now he says many tech companies are on the wrong path to creating intelligent machines.
Dr. LeCun repeatedly argued that open source was the safest path. It meant that no one company would control the technology and that anyone could use these systems to identify and fight potential risks.
----
At the AI Impact Summit in Delhi, Yoshua Bengio says that AI systems should make predictions without any goal.
He says goals would bias the systems in possibly dangerous ways by giving them drives and desires.
He claims that the proper template to use is an idealized human scientist.
I completely disagree with the whole premise.
I don't think any system can do anything useful without an objective.
One point we agree on is that LLMs are intrinsically unsafe. But they are unsafe precisely because they don't have any objectives and merely emulate the humans who produced the text they've been trained on.
My recommendation is the exact opposite of Yoshua's.
AI systems *should* have goals. They should be designed so that they can do nothing other than fulfill the goals we give them.
Naturally, these goals and objectives must include safety guardrails.
But the point is that, by construction, the system *must* fulfill the goal we give it and *must* abide by the safety-guardrail constraints.
I call this an objective-driven AI architecture.
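A minimal, hypothetical sketch of what that last point could look like as inference-time action selection, under toy assumptions of my own (a one-dimensional world, a hand-written world model, a speed-limit guardrail): the system imagines the predicted outcome of each candidate action, discards any outcome that violates the guardrail, and then picks the action whose predicted outcome best fulfills the goal. All names here (world_model, task_cost, guardrail_violated) are illustrative and do not come from the post or from any actual LeCun system.

```python
# Toy sketch of "objective-driven" action selection: fulfill the goal we give
# the system, subject to hard safety-guardrail constraints. All names and
# dynamics are hypothetical illustrations, not a real implementation.

from dataclasses import dataclass


@dataclass
class State:
    position: float  # toy 1-D world: where the agent is
    speed: float     # how fast the last action moved it


def world_model(state: State, action: float) -> State:
    """Predict the outcome of an action before taking it (toy dynamics)."""
    return State(position=state.position + action, speed=abs(action))


def task_cost(state: State, goal: float) -> float:
    """How far the predicted outcome is from the goal we set."""
    return abs(state.position - goal)


def guardrail_violated(state: State, speed_limit: float = 1.0) -> bool:
    """Hard safety constraint: predicted outcomes above the speed limit are forbidden."""
    return state.speed > speed_limit


def choose_action(state: State, goal: float, candidate_actions: list[float]) -> float:
    """Pick the action whose *predicted* outcome best fulfills the goal,
    considering only actions that satisfy the guardrail constraint."""
    feasible = []
    for a in candidate_actions:
        predicted = world_model(state, a)  # imagine the outcome first
        if guardrail_violated(predicted):
            continue                       # guardrail acts as a hard constraint
        feasible.append((task_cost(predicted, goal), a))
    if not feasible:
        return 0.0  # no safe action found: do nothing
    return min(feasible)[1]


if __name__ == "__main__":
    s = State(position=0.0, speed=0.0)
    # The agent is driven toward position 5.0, but the guardrail rules out the
    # "fast" action (2.0), so it advances one constrained step at a time.
    for _ in range(6):
        a = choose_action(s, goal=5.0, candidate_actions=[-1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
        s = world_model(s, a)
        print(f"action={a:+.1f} -> position={s.position:.1f}")
```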