DeepSeek limits access to AI model as demand strains capacity

BEIJING: DeepSeek, the Chinese startup whose artificial intelligence (AI) model roiled global markets last week, said it would restrict access to its application programming interface (API) service because of shortages in its server capacity.

In a posting on its website, the company said it suspended the ability of customers to top up their API credits to avoid any broader impact on their services.

“Any existing stored value won’t be affected, and existing balances can continue to be used; thank you for your understanding,” the company said.

DeepSeek’s services have been overwhelmed with demand since late January after it unveiled an AI chatbot that it says can rival OpenAI’s ChatGPT and was developed at a fraction of the cost of competing products.

It had previously restricted signups for new users to people with a mainland China telephone number.

While the hype around the Chinese company’s latest AI model triggered a US$1 trillion rout in US and European technology stocks, it also prompted US efforts to close loopholes in its restrictions on sales of chips used for AI applications.

American officials are probing whether DeepSeek acquired Nvidia Corp semiconductors through third parties in Singapore, circumventing restrictions on the export of AI chips to China, people familiar with the matter told Bloomberg earlier.

DeepSeek, whose rock-bottom pricing had alarmed some rivals, also said in the posting that discounts for access to its model would end on Feb 8.

After that, it said, access to the chat model will cost ¥2 per million input tokens and ¥8 per million output tokens.

When its reasoning model goes online, the charges will be ¥4 per million input tokens and ¥16 per million output tokens.
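For scale, per-million-token pricing translates into request costs as follows. This is an illustrative sketch: only the rates come from the announcement, while the function name and the token counts in the example are assumptions.

```python
def api_cost_yuan(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Cost in yuan, given rates quoted per million tokens."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Chat model rates after Feb 8: ¥2 per million input, ¥8 per million output.
# A hypothetical request with 500k input and 250k output tokens:
cost = api_cost_yuan(500_000, 250_000, input_rate=2, output_rate=8)
print(f"¥{cost:.2f}")  # ¥1.00 for input plus ¥2.00 for output = ¥3.00
```

At the reasoning model's announced rates (¥4 input, ¥16 output per million tokens), the same request would cost exactly twice as much.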
