
Ex-Intel CEO Picks China’s DeepSeek Over OpenAI for New Startup!

By Harper Westfield

DeepSeek R1, an open-source large language model from China, is gaining recognition across the technology sector for performance that reportedly surpasses OpenAI's o1 while consuming significantly less compute and fewer training resources. Pat Gelsinger, former CEO of Intel, recently praised the model on the social media platform X, and in a discussion with TechCrunch he disclosed that his new startup will adopt DeepSeek R1 instead of OpenAI's offering.

For those out of the loop, Pat Gelsinger resigned as Intel's CEO last month, and two interim CEOs were appointed in his wake. Reports suggest the board pushed for his departure amid the company's disappointing stock performance. Gelsinger is now chairman of his new venture, Gloo, a communications platform tailored for churches.

Gloo is currently developing an AI-driven service named “Kallm,” described as an AI-powered chatbot. After evaluating both DeepSeek’s R1 and OpenAI’s o1 models, Gelsinger found the former to be a better match for Gloo, thanks to its open-source nature and ease of integration. “My team at Gloo is working with R1 today, though they had the option to integrate o1 — but only through its APIs,” Gelsinger explained. He further praised DeepSeek on X, appreciating the company for making AI more accessible and fostering competition.
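To illustrate the integration point, here is a minimal, hypothetical sketch (the model name, endpoint, and prompt are assumptions for illustration, not details from Gloo): open-weights models such as R1 are commonly self-hosted behind an OpenAI-compatible HTTP endpoint (for example via vLLM or Ollama), so the request body looks the same whether it targets a hosted API or your own server.

```python
import json

# Hypothetical sketch: an open-weights model like DeepSeek R1 can be served
# behind an OpenAI-compatible endpoint, so the request payload is identical
# to a hosted-API call; only the base URL and model name change.
# The URL, model name, and prompt below are illustrative assumptions.
base_url = "http://localhost:8000/v1/chat/completions"  # self-hosted server
payload = {
    "model": "deepseek-r1",
    "messages": [{"role": "user", "content": "Draft a welcome message."}],
}
body = json.dumps(payload)  # this JSON would be POSTed to base_url
```

Swapping in OpenAI's o1 would mean pointing the base URL at OpenAI's hosted API and changing the model field, which is part of why a model you can run yourself, rather than one reachable "only through its APIs," was attractive here.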

DeepSeek trained its R1 model on Nvidia H800 GPUs but reportedly runs inference primarily on Huawei Ascend AI accelerators (likely the 910C) to cut costs and lessen reliance on Western technology. Sam Altman once remarked that AI startups with $10 million are "completely hopeless," yet DeepSeek reportedly spent just $5.6 million on training, using 11x less compute than Meta's Llama 3 405B model.


While some industry specialists have questioned these figures, Gelsinger maintains, “You’ll never get complete transparency, especially since most of the development occurred in China. However, all signs indicate that their training costs are 10-50x lower than those of o1.” He emphasized how DeepSeek is challenging the industry to adopt open-source solutions and innovate instead of solely relying on hardware upgrades.
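As a back-of-envelope check, the quoted figures can be combined directly: the $5.6 million training cost and the "10-50x lower" multiple both come from the claims above, and nothing else is assumed.

```python
# Back-of-envelope arithmetic on the figures quoted in the article:
# DeepSeek's reported training cost and Gelsinger's "10-50x lower" estimate.
deepseek_cost_m = 5.6          # reported R1 training cost, millions of USD
low_mult, high_mult = 10, 50   # Gelsinger's claimed cost-advantage range
implied_o1_low = deepseek_cost_m * low_mult    # 56.0
implied_o1_high = deepseek_cost_m * high_mult  # 280.0
print(f"Implied o1 training cost: ${implied_o1_low:.0f}M to ${implied_o1_high:.0f}M")
```

If Gelsinger's multiples are even roughly right, o1's training would have cost somewhere between $56 million and $280 million, which is the scale of gap driving the industry debate he describes.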

DeepSeek is currently under scrutiny from Microsoft and OpenAI for allegedly making improper use of ChatGPT output in developing its model. DeepSeek has also been open about collecting large amounts of user data and storing it on servers located in China.

