++++++
蘋果的人工智慧戰略正在發生重大轉變。透過整合Google的Gemini模型,蘋果顯示下一代Siri將比用戶以往見過的任何版本更先進。
Gemini旨在實現更深層的推理、更強的上下文感知和更複雜的語言理解——這些能力有望顯著縮小Siri與當今領先的人工智慧助理之間的差距。
此舉反映了整個科技產業的普遍趨勢:隨著人工智慧開發成本的不斷攀升和運算需求的日益增長,合作變得至關重要。即使是規模最大的公司也不再完全獨立開發。
對使用者而言,這項變更意味著在蘋果設備上可以體驗到更自然的對話、更智慧的任務處理以及更高的工作效率,同時蘋果也將繼續強調隱私保護和系統級整合。
從策略角度來看,這項合作凸顯了人工智慧助理在未來平台中的核心地位,它們正從簡單的語音工具轉變為融入日常生活的智慧數位夥伴。
WWWW
吉姆·克萊默敦促投資者忽略短期市場波動,並關注他所說的醫療保健創新領域的重大轉變。他強調英偉達和禮來公司的合作是一項變革性舉措,但華爾街卻對此大多視而不見。
克萊默將此次合作描述為一項價值10億美元的投資,旨在大幅加快藥物研發速度。其目標是在加速研發關鍵新藥的同時,將研發成本降低高達70%。
這項合作的核心是英偉達的「實驗室在環」模型,該模型將大部分藥物測試從實體實驗室轉移到軟體模擬環境。這種轉變使研究人員能夠更早發現問題,並在更短的時間內進行更多實驗。
克萊默表示,這種方法可以將研究效率提高近100倍。人工智慧不再只是輔助工具,而是成為藥物研發的核心基礎設施。
儘管機會龐大,克萊默指出,市場的焦點卻在其他方面。他認為,投資人被每日財報反應以及銀行和零售股的短期波動所分散了注意力。
此次合作仰賴英偉達的下一代運算架構及其BioNeMo平台。克萊默認為,先進運算與生物學的融合正在重塑藥物研發的經濟格局。
截至2026年,英偉達和禮來公司的股價表現落後於市場對人工智慧的普遍熱情。克萊默認為,這種脫節可能為耐心投資者創造長期投資機會。
Alphabet 和 NVIDIA 正在深化雙方長達十年的合作關係,以推動智能體人工智慧、機器人、藥物研發等領域的發展。
此次合作涉及深度協同工程,包括整合平台、開源框架和託管服務。
✅ Google Cloud 是第一批將 NVIDIA Blackwell 平台移轉到雲端的平台之一。
✅ Google 分散式雲端利用 NVIDIA Blackwell 上的 NVIDIA 機密運算,協助企業在本地運行 Google Gemini。
✅ NVIDIA AI 平台已整合到 Vertex AI、Cluster Director 和 Google Kubernetes Engine 中。
✅ 將 NVIDIA Nemotron 系列開放式模型引入 Vertex AI Model Garden。
觀看完整影片以了解更多資訊 ➡️ https://nvda.ws/3LerJhj
BREAKING: China has restricted purchases of Nvidia's H200 AI chips and will only approve purchases of the chips "under special circumstances," per The Information.
黃仁勳:加速運算取代CPU成為超級運算的核心
英偉達執行長黃仁勳在美沙投資論壇上發表講話,強調了全球運算領域的歷史性轉變。他指出,六年前,CPU還佔據全球頂級超級電腦90%的份額,而如今這一比例已不足15%。以GPU為主導的加速運算扭轉了這一局面,其份額從10%增長到近90%,標誌著高效能運算和資料密集型雲端工作負載已徹底從通用CPU轉向專用加速運算,這是一個清晰的轉折點。
- Nvidia CEO sends strong message on Taiwan Semiconductor
- the growth, development, or expansion of something."the rapid buildout of digital technology"
📹:美沙投資論壇/YouTube
📢Jensen Huang Says Accelerated Computing Has Replaced CPUs at the Core of Supercomputing
Speaking at the US–Saudi Investment Forum, Nvidia CEO Jensen Huang highlighted a historic shift in global computing, revealing that CPUs once powered 90% of the world’s top supercomputers just six years ago, but now account for less than 15%. Accelerated computing, led by GPUs, has flipped that ratio, growing from 10% to nearly 90%, marking a clear inflection point where high-performance computing and data-intensive cloud workloads have moved decisively away from general-purpose CPUs toward specialized acceleration.
-
📹: U.S - Saudi Investment Forum/YouTube
- High Failure Rate for Enterprise AI Projects: An MIT study indicated that 95% of enterprise generative AI pilots fail to provide a measurable return on investment (ROI).
- AI Bubble Concerns: Market analysts have raised concerns that many AI tech firms are overvalued, potentially existing within an "AI bubble" with valuations stretched thin since the dot-com era.
- Unsustainable Cost and Business Models: Many startups act as "wrappers" around major AI models (like OpenAI's API) and struggle with high operational costs (e.g., GPU usage) relative to their revenue, leading to fragile business models.
- Vendor Lock-in and Supply Chain Reliance: The entire AI industry is heavily reliant on a single point of failure in the supply chain: a few chip manufacturers like Nvidia. Geopolitical disruptions or manufacturing delays could stall progress across the entire ecosystem.
- Talent Shortages: There is a significant shortage of skilled AI talent, making acquisition and retention a major hurdle for companies.
- Bias and Discrimination: AI models can perpetuate and amplify real-world biases if trained on incomplete or unbalanced datasets, leading to discriminatory outcomes in areas like hiring or lending.
- Workforce Disruption: Automation is rapidly changing job markets, with roles in customer service, manufacturing, and data entry being heavily impacted. This leads to challenges in retraining and upskilling the existing workforce.
- Exploitation of Workers: AI companies often rely on underpaid contract workers, particularly in the Global South, to train models by reviewing highly traumatic and graphic content, which has led to PTSD and lawsuits.
- E-Waste and Environmental Impact: Training large AI models requires immense computational power and energy, resulting in a high carbon footprint and contributing to a growing e-waste problem as hardware quickly becomes obsolete.
- Lack of Accountability and Governance: The lack of standardized best practices and clear legal frameworks means there is often no clear accountability when AI systems cause harm or make critical errors.
- Data Privacy and Security Risks: AI systems ingest massive amounts of data, raising concerns about accidental exposure of sensitive information, intellectual property risks, and potential data breaches.
- Hallucinations and Reliability: Large language models (LLMs) can generate convincing but factually incorrect information ("hallucinations"), which is a major problem in industries requiring accuracy, such as healthcare or finance.
- Vulnerabilities to Attacks: AI systems are susceptible to new types of cyberattacks, including prompt injection, data poisoning, and the use of AI to create sophisticated deepfakes for fraud.
- Scalability and Infrastructure Constraints: Companies struggle to manage the operational complexity and infrastructure needs (e.g., access to high-performance GPUs, energy supply) required to scale AI models effectively.
人工智慧公司和實施人工智慧的企業在倫理、財務、技術和營運等各個領域都面臨許多挑戰。這些挑戰涵蓋了企業專案的高失敗率以及大規模的社會議題。
人工智慧公司面臨的關鍵問題
財務和業務營運問題
企業人工智慧計畫高失敗率:麻省理工學院的一項研究表明,95%的企業生成式人工智慧試點計畫未能帶來可衡量的投資報酬率 (ROI)。
人工智慧泡沫擔憂:市場分析師擔憂許多人工智慧技術公司估值過高,可能處於「人工智慧泡沫」之中,其估值自網路泡沫時期以來一直處於低位。
成本和商業模式不可持續:許多新創公司充當主流人工智慧模型(例如 OpenAI 的 API)的“包裝器”,但其營運成本(例如 GPU 使用成本)相對於收入而言過高,導致商業模式脆弱。
供應商鎖定和供應鏈依賴:整個人工智慧產業嚴重依賴供應鏈中的單一故障點:少數幾家晶片製造商,例如英偉達。地緣政治動盪或生產延誤都可能阻礙整個生態系統的發展。
人才短缺:熟練的人工智慧人才嚴重短缺,這使得人才的取得和留用成為企業面臨的主要挑戰。
倫理和社會問題
偏見與歧視:如果人工智慧模型使用不完整或不平衡的資料集進行訓練,則可能會延續和放大現實世界中的偏見,從而導致招聘或貸款等領域出現歧視性結果。
勞動力市場變革:自動化正在迅速改變就業市場,客戶服務、製造業和資料輸入等職位受到嚴重影響。這給現有員工的再培訓和技能提升帶來了挑戰。
剝削工人:人工智慧公司通常依賴低薪合約工,尤其是在全球南方國家,讓他們審查高度創傷性和畫面感極強的內容來訓練模型,這導致了創傷後壓力症候群(PTSD)和訴訟。
電子垃圾和環境影響:訓練大型人工智慧模型需要龐大的運算能力和能源,造成高碳排放,並隨著硬體快速過時而加劇電子垃圾問題。
缺乏問責制和治理:由於缺乏標準化的最佳實踐和明確的法律框架,當人工智慧系統造成傷害或出現重大錯誤時,往往缺乏明確的問責機制。
技術和安全問題
資料隱私和安全風險:人工智慧系統會處理大量數據,引發人們對敏感資訊意外洩露、智慧財產權風險和潛在資料外洩的擔憂。
幻覺和可靠性:大型語言模型(LLM)可能會產生看似可信但實際上錯誤的資訊(「幻覺」),這在醫療保健或金融等需要高度準確性的行業中是一個重大問題。
易受攻擊性攻擊:人工智慧系統容易受到新型網路攻擊,包括快速注入、資料投毒以及利用人工智慧創建複雜的深度偽造影片進行詐欺。
可擴展性和基礎設施限制:企業難以有效擴展人工智慧模型所需的營運複雜性和基礎設施需求(例如,高效能GPU的存取、能源供應)。
Nvidia Is Now Worth $5 Trillion as It Consolidates Power in A.I. Boom
The A.I. chip maker has become a linchpin in the Trump administration’s trade negotiations in Asia.

Nvidia
$5 trillion
market capitalization
$5.03 trillion
Microsoft
4
Apple
3
2
1
0
Jan.
April
July
Oct.
Jan.
April
July
Oct.
2024
2025
As Jensen Huang, the chief executive of the chip making giant Nvidia, traveled to Asia to meet with President Trump on Wednesday, his company’s value topped $5 trillion. It was a show of wealth that would have been unthinkable a few years ago.
// AI教主黃仁勳前天接受了一次訪談,針對最近關於AI的質疑,發表了他的看法。
沒有留言:
張貼留言