DeepSeek, the Chinese AI startup, is facing increasing scrutiny and accusations of technology copying. Following allegations from competitors like OpenAI as well as US President Donald Trump’s AI advisor David Sacks, Indian-American billionaire Vinod Khosla has now accused DeepSeek of “ripping off” the technology behind its R1 model.
According to Khosla, DeepSeek's chatbot makes the same mistakes as OpenAI's o1, raising suspicion of intellectual property theft.
“One of our startups found Deepseek makes the same mistakes O1 makes, a strong indication the technology was ripped off,” Khosla said.
“It feels like they [DeepSeek] then hacked some code and did some impressive optimizations on top. Most likely, not an effort from scratch,” Khosla added.
The company’s R1 large language model (LLM) rose to prominence after reports that it delivers performance comparable to that of its US rivals while being developed at a fraction of the cost. DeepSeek’s rapid rise and cost-effective approach have disrupted the AI landscape, wiping out an estimated $1 trillion from US tech stocks this week.
The success has fueled speculation that the company may have achieved its breakthrough by copying or reverse-engineering existing technologies. There is, however, no concrete evidence to support the allegations.
OpenAI and Microsoft are investigating whether DeepSeek improperly leveraged OpenAI’s models to train its own AI, according to a Bloomberg report. The inquiry was reportedly triggered by concerns that DeepSeek violated OpenAI’s terms of service by extracting data from OpenAI’s models to train its own, after the Chinese firm, known for its cost-effective AI development, introduced models that compete with OpenAI’s flagship offerings.
Sources told the publication that Microsoft’s security teams detected suspicious data activity in late 2024, potentially involving large-scale data extraction from OpenAI through developer accounts linked to DeepSeek.
OpenAI also reportedly found evidence of AI model distillation, a process in which smaller AI models are trained on the outputs of larger, more powerful models.
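For readers unfamiliar with the term, the sketch below shows what distillation typically looks like in practice: a small "student" model is trained to match the softened output distribution of a larger, frozen "teacher" model. This is a generic, illustrative example assuming a PyTorch setup; the function names, temperature value, and training loop are hypothetical and are not drawn from any reported details about DeepSeek's or OpenAI's systems.

```python
# Minimal, illustrative sketch of knowledge distillation (not DeepSeek's actual pipeline).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

def train_step(student, teacher, batch, optimizer):
    # The teacher is frozen; only the smaller student model is updated.
    with torch.no_grad():
        teacher_logits = teacher(batch)   # larger model's predictions
    student_logits = student(batch)       # smaller model's predictions
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The concern reported by Bloomberg is not with distillation as a technique, which is standard practice when a lab distills its own models, but with allegedly performing it against another company's models in violation of that company's terms of service.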