
OpenAI launches project Foundry to rent machines and run ChatGPT

OpenAI has debuted a new developer platform, called Foundry, that lets customers run the company’s newer machine learning models, such as GPT-3.5, on dedicated hardware. In screenshots of documentation posted on Twitter by users with early access, OpenAI describes the offering as ‘designed for cutting-edge customers running larger workloads’. Assuming the documentation is genuine, Foundry will, when it launches, provide a ‘static allocation’ of computing capacity (probably on Azure, OpenAI’s preferred public cloud platform) dedicated to a single customer. Users will be able to monitor specific instances with the same tools and dashboards that OpenAI uses to build and optimize its models.

In addition, Foundry will provide a degree of control, letting users decide whether or not to upgrade to newer model versions, and will offer ‘more robust’ fine-tuning for OpenAI’s latest models. Foundry will also come with customized service plans, including dedicated technical support.


Not for everyone

All of this is clearly aimed at companies, not individual consumers. Running a lightweight version of GPT-3.5 will cost $78,000 for a three-month commitment or $264,000 for a one-year commitment. To put that in perspective, one of Nvidia’s recent-generation supercomputers, the DGX Station, costs $149,000 per unit. Attentive users on Twitter and Reddit noticed that one of the next-generation models listed in the instance price table has a maximum context window of 32k tokens. The context window is the span of text the model considers before generating additional text; a longer context window lets the model ‘remember’ more text.
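For a sense of what those commitments imply, here is some back-of-the-envelope arithmetic on the two quoted prices (the figures come from the leaked price table; the per-month breakdown and discount calculation are our own):

```python
# Quoted Foundry commitments for the lightweight GPT-3.5 instance (USD).
QUARTER_PRICE = 78_000   # three-month commitment
YEAR_PRICE = 264_000     # one-year commitment

per_month_quarterly = QUARTER_PRICE / 3    # 26,000 USD/month
per_month_yearly = YEAR_PRICE / 12         # 22,000 USD/month

# Committing for a full year rather than four consecutive quarters:
four_quarters = 4 * QUARTER_PRICE          # 312,000 USD
saving = four_quarters - YEAR_PRICE        # 48,000 USD
discount = saving / four_quarters          # ~15.4% off the quarterly rate

print(per_month_quarterly, per_month_yearly, round(discount * 100, 1))
```

In other words, the yearly commitment works out to roughly a 15% discount over stringing quarters together, still far above what an individual user would pay.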

GPT-3.5, OpenAI’s latest text generation model, has a maximum context window of 4k, suggesting that this mysterious new model could be the long-awaited GPT-4 or something even later. OpenAI is under growing pressure to turn a profit after a multi-billion-dollar investment from Microsoft. The company reportedly expects to earn $200 million in 2023, a pittance compared to the more than $1 billion that has been poured into the start-up. Processing costs are the biggest drain on its resources: training cutting-edge artificial intelligence models can cost millions of dollars, and running them isn’t much cheaper. According to OpenAI co-founder and CEO Sam Altman, running ChatGPT, OpenAI’s viral chatbot, costs a few cents per chat, a far from negligible figure given that ChatGPT had over one million users last December.
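The difference between a 4k and a 32k context window can be illustrated with a toy sketch. This is not how OpenAI’s models actually work internally, and real systems count subword tokens produced by a tokenizer rather than whole words; the function below simply shows why a fixed window bounds what a model can ‘remember’:

```python
def truncate_to_window(tokens, window_size):
    """Keep only the most recent `window_size` tokens -- a toy model of how a
    fixed context window limits what a language model can 'see' of a
    conversation when generating its next reply."""
    return tokens[-window_size:]

# Pretend each word is one token (real tokenizers split text differently).
conversation = "the quick brown fox jumps over the lazy dog".split()

# A small window forgets the start of the conversation...
print(truncate_to_window(conversation, 4))   # ['over', 'the', 'lazy', 'dog']
# ...while a larger one retains all of it.
print(truncate_to_window(conversation, 32))  # the full nine-word conversation
```

Scaling the window from 4k to 32k tokens means roughly eight times as much prior text survives this truncation, which is why the 32k figure in the leaked price table drew attention.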

In its push toward monetization, OpenAI recently launched a ‘pro’ version of ChatGPT, ChatGPT Plus, priced at $20 per month, and has partnered with Microsoft to develop Bing Chat, a (to put it mildly) controversial chatbot that has caught the public’s attention. According to The Information, OpenAI plans to introduce a ChatGPT mobile app and to bring its AI language technology into Microsoft apps such as Word, PowerPoint and Outlook. Meanwhile, OpenAI continues to make its technology available through Microsoft’s Azure OpenAI Service, a business-focused model-serving platform, and powers Copilot, a premium code-generation service developed in collaboration with GitHub.

Antonino Caffo has worked in journalism, particularly technology journalism, for fifteen years. He covers IT security as well as consumer electronics, and writes for the leading Italian generalist and trade publications. You can sometimes see him on television explaining how technology works, which is not as trivial as it may seem.