Monday, January 26, 2026

Dr. Yann LeCun, 65, an artificial intelligence pioneer, says the tech "herd" has taken a wrong turn and is marching into a dead end. His new start-up tries to predict the outcome of A.I.'s actions, which, he says, will lead to greater progress. The limits of large language models (LLMs): a lack of genuine reasoning ability, and why language competence ≠ intelligence. ... Separately, the 2025 Queen Elizabeth Prize for Engineering was awarded to seven individuals for their seminal contributions to the development of modern machine learning.

His new start-up will continue his work on systems that try to predict the outcome of their actions. That, he said, would allow A.I. to progress beyond the status quo.
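To make that idea concrete, here is a minimal, hypothetical sketch of what "predicting the outcome of its actions" can mean in code: a small transition model maps a (state, action) pair to a predicted next state, and an agent picks the action whose predicted outcome lands closest to a goal. This is a generic world-model illustration, not the start-up's actual system; the toy linear model, the dimensions, and every name in it are assumptions made for illustration.

```python
import numpy as np

# Hypothetical illustration only: a tiny "world model" that, given the
# current state and a candidate action, predicts the resulting next state.
# The weights are random stand-ins for something that would be learned.
rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2
W_s = rng.normal(size=(STATE_DIM, STATE_DIM)) * 0.1
W_a = rng.normal(size=(STATE_DIM, ACTION_DIM)) * 0.1

def predict_next_state(state, action):
    """Predict the outcome (next state) of taking `action` in `state`."""
    return np.tanh(W_s @ state + W_a @ action)

def pick_action(state, candidate_actions, goal):
    """Choose the action whose predicted outcome lands closest to `goal`."""
    outcomes = [predict_next_state(state, a) for a in candidate_actions]
    distances = [np.linalg.norm(o - goal) for o in outcomes]
    return candidate_actions[int(np.argmin(distances))]

state = np.zeros(STATE_DIM)
goal = np.ones(STATE_DIM)
actions = [rng.normal(size=ACTION_DIM) for _ in range(8)]
print("chosen action:", pick_action(state, actions, goal))
```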




The reason, he said, goes back to what he has argued for years: Large language models, or L.L.M.s, the A.I. technology at the heart of popular products like ChatGPT, can get only so powerful. And companies are throwing everything they have at projects that won’t get them to their goal to make computers as smart as or even smarter than humans. More creative Chinese companies, he added, could get there first.



By the early 2010s, researchers had begun to show that neural networks could power a wide range of technologies, including face recognition systems, digital assistants and self-driving cars.

An A.I. Pioneer Warns the Tech ‘Herd’ Is Marching Into a Dead End

Yann LeCun helped create the technology behind today’s chatbots. Now he says many tech companies are on the wrong path to creating intelligent machines.

Dr. LeCun repeatedly argued that open source was the safest path. It meant that no one company would control the technology and that anyone could use these systems to identify and fight potential risks.





Video: Ecstatic King Charles Presents Queen Elizabeth Prize For ... (YouTube · The Royal Family Channel)




The winners of the 2025 Queen Elizabeth Prize for Engineering were awarded to seven individuals for their seminal contributions to the development of modern machine learning, a core component of artificial intelligence (AI) advancements.
The 2025 laureates, who share the £500,000 prize, are:
Dr. Bill Dally
Dr. Fei-Fei Li
Professor Geoffrey Hinton
Professor John Hopfield
Jensen Huang
Dr. Yann LeCun
Professor Yoshua Bengio
Their combined work laid the conceptual and hardware foundations for modern machine learning and AI, including the development of artificial neural networks, essential high-performance computing hardware (GPUs), and high-quality datasets like ImageNet which are critical for training AI systems.
The winners were announced in February 2025, and His Majesty King Charles III presented the award during a ceremony in November 2025.


Yann LeCun on Large Language Models (LLMs)
1. “An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1 — it’s reactive. There’s no reasoning.”
— On why LLMs lack genuine reasoning capacity.
2. “LLMs are great, they’re useful, we should invest in them — a lot of people are going to use them … But they are not a path to human-level intelligence. They’re just not. Right now, they’re sucking the air out of the room — there’s basically no resources for anything else.”
— On why industry obsession with LLMs is misplaced.
3. “Language has strong statistical properties… That’s why we have systems that can pass the bar exam or compute integrals, but where is our domestic robot? A cat still vastly outperforms them in the real world.”
— On why language competence ≠ intelligence.
4. “On the highway toward human-level AI, a large language model is basically an off-ramp — a distraction, a dead end.”
— On LLMs as an evolutionary cul-de-sac in AI research.
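To make the first quote concrete, here is a minimal, hypothetical sketch of token-by-token generation with a fixed amount of computation per token: each step is one forward pass through a toy model followed by a greedy pick, with no planning or deliberation between steps. The vocabulary, sizes, weights, and names are all made up for illustration; this is not any real model's code.

```python
import numpy as np

# Hypothetical toy "LLM": it emits one token at a time, and every token
# costs the same fixed amount of computation (one forward pass), no matter
# how hard the underlying question is.
rng = np.random.default_rng(0)
VOCAB, CONTEXT, HIDDEN = 100, 16, 32
embed = rng.normal(size=(VOCAB, HIDDEN)) * 0.1
unembed = rng.normal(size=(HIDDEN, VOCAB)) * 0.1

def forward(token_ids):
    """One fixed-cost forward pass: average the context embeddings and
    project back to vocabulary logits for the next token."""
    h = embed[token_ids[-CONTEXT:]].mean(axis=0)
    return unembed.T @ np.tanh(h)

def generate(prompt_ids, n_tokens):
    """Greedy decoding: one token per step, same compute every step."""
    ids = list(prompt_ids)
    for _ in range(n_tokens):
        logits = forward(np.array(ids))
        ids.append(int(np.argmax(logits)))  # reactive: no lookahead, no planning
    return ids

print(generate([1, 2, 3], 10))
```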
———
Image credit: Times of India, Gadgets Now
Mark Bishop (I credited it 😜)
Here’s the source article: https://timesofindia.indiatimes.com/....../125428070.cms



Jensen Huang Listed Alongside A.I. Pioneers as Seven Winners Share the Honor

This year's seven winners of the Queen Elizabeth Prize for Engineering include Jensen Huang as well as the Chinese-American scientist Fei-Fei Li, the only woman among the laureates. The others are NVIDIA chief scientist Bill Dally, 92-year-old A.I. pioneer John Hopfield, Yoshua Bengio, Geoffrey Hinton, and Meta's chief A.I. scientist Yann LeCun. They were honored as the foundational figures who "enabled computers to mimic the workings of the human brain and, from that, developed modern machine-learning models."
