DeepSeek, the Chinese AI startup, is facing increasing scrutiny and accusations of copying technology. Following similar allegations from competitors like OpenAI and from David Sacks, US President Donald Trump's AI advisor, Indian-American billionaire Vinod Khosla has now accused DeepSeek of "ripping off" the technology behind its R1 model.
According to Khosla, DeepSeek's chatbot makes the same mistakes as OpenAI's o1 model, raising suspicion of intellectual property theft.
“One of our startups found Deepseek makes the same mistakes O1 makes, a strong indication the technology was ripped off,” Khosla said.
“It feels like they [DeepSeek] then hacked some code and did some impressive optimizations on top. Most likely, not an effort from scratch,” Khosla added.
The company's R1 large language model (LLM) rose to prominence after reports said it delivered performance comparable to that of its US rivals while being developed at a fraction of the cost. DeepSeek's rapid rise and cost-effective approach have disrupted the AI landscape, wiping out an estimated $1 trillion from US tech stocks this week.
The success has fueled speculation that the company may have achieved its breakthrough by copying or reverse-engineering existing technologies. There is, however, no concrete evidence to support the allegations.
OpenAI and Microsoft are investigating whether DeepSeek improperly leveraged OpenAI's models to train its own competing AI, according to a Bloomberg report. The inquiry was reportedly triggered by concerns that DeepSeek, known for its cost-effective AI development, may have violated OpenAI's terms of service by extracting data from OpenAI's models to train its own.
Sources told the publication that Microsoft’s security teams detected suspicious data activity in late 2024, potentially involving large-scale data extraction from OpenAI through developer accounts linked to DeepSeek.
OpenAI also reportedly found evidence of AI model distillation, a process in which a smaller AI model is trained to reproduce the outputs of a larger, more powerful one.
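To illustrate the concept, here is a minimal toy sketch of distillation, assuming a hypothetical setup: a "teacher" model's predictions, rather than ground-truth labels, become the training targets for a smaller "student" model. This is an illustrative simplification, not a description of how DeepSeek or OpenAI actually train their models.

```python
# Toy distillation sketch (hypothetical): a student model is fit
# to mimic a teacher's outputs instead of ground-truth labels.

def teacher(x):
    # Stand-in for a large pretrained model's prediction.
    return 2.0 * x + 1.0

def train_student(xs, lr=0.01, steps=5000):
    # Student: a simple linear model y = w*x + b, trained with
    # gradient descent against the teacher's outputs.
    w, b = 0.0, 0.0
    targets = [teacher(x) for x in xs]  # distillation targets
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, t in zip(xs, targets):
            err = (w * x + b) - t
            gw += 2 * err * x / n
            gb += 2 * err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = train_student([0.0, 1.0, 2.0, 3.0])
# The student converges toward the teacher's behavior (w ≈ 2, b ≈ 1)
# without ever seeing the original training data.
```

In practice, distillation of large language models works on the same principle at scale: the student is trained on the teacher's generated text or output probabilities, which is why querying a model's API heavily can, in principle, be used to replicate its behavior.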