Groq and Humain Offer New Open Models from OpenAI from Day One

Available worldwide with real-time performance, low cost and local support in Saudi Arabia

Groq, a pioneer in fast inference, and Humain, a PIF company and Saudi Arabia's leading AI services provider, today announced the immediate availability of OpenAI's two new open models on GroqCloud. With this launch, gpt-oss-120b and gpt-oss-20b go live from day one on Groq's optimized inference platform, with full 128K context, real-time performance and integrated server-side tools.

Groq has long supported OpenAI's open-source efforts, including serving Whisper at scale. This launch builds on that foundation, bringing OpenAI's latest models into production with global access and local support from Humain.

“OpenAI is setting a new high-performance standard for open-source models,” said Jonathan Ross, CEO of Groq. “Groq was built to run such models quickly and affordably so developers can use them from day one. Working with Humain strengthens local access and support in the Kingdom of Saudi Arabia, enabling developers in the region to build smarter and faster.”

“Groq delivers the unmatched inference speed, scalability and cost efficiency we need to bring the latest AI into the Kingdom,” said Tareq Amin, CEO of Humain. “Together we are enabling a new wave of Saudi innovation, driven by the best open-source models and the infrastructure to scale them worldwide. We are proud to support OpenAI's leadership in open-source AI.”

Built for full model capability

To get the most out of the new OpenAI models, Groq provides extended context and integrated tools such as code execution and web search. Web search supplies relevant information in real time, while code execution enables reasoning and complex workflows. The Groq platform delivers these capabilities from day one with a full 128K-token context length.
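As an illustration, a developer could call one of these models through GroqCloud's OpenAI-compatible endpoint. The Python sketch below is a minimal example, not an official snippet from Groq or OpenAI; the model ID and endpoint URL are assumptions that should be checked against Groq's current documentation.

```python
# Minimal sketch: calling an OpenAI open model hosted on GroqCloud via the
# OpenAI-compatible API. Model ID and base URL are assumptions to verify.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
    api_key="YOUR_GROQ_API_KEY",                # placeholder; use your own key
)

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # assumed GroqCloud model ID
    messages=[
        {"role": "user", "content": "Summarize the benefits of a 128K context window."},
    ],
)

print(response.choices[0].message.content)
```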

Unsurpassed price-performance ratio

Groq's purpose-built stack delivers the lowest cost per token for OpenAI's new models while maintaining speed and accuracy.

gpt-oss-120b currently runs at 500+ tokens per second and gpt-oss-20b at 1000+ tokens per second on GroqCloud.

Groq offers OpenAI's latest open models at the following prices:

  • gpt-oss-120b: $0.15 per million input tokens and $0.75 per million output tokens
  • gpt-oss-20b: $0.10 per million input tokens and $0.50 per million output tokens

Note: For a limited time, tool calls used with the OpenAI open models are not charged. More information is available at groq.com/pricing.
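For a sense of scale, here is a small back-of-the-envelope calculation at the listed rates. The helper function below is purely illustrative and not part of any Groq or OpenAI SDK; the example workload figures are made up.

```python
# Illustrative cost estimate at the listed per-million-token rates (USD).
PRICES = {
    "gpt-oss-120b": {"input": 0.15, "output": 0.75},
    "gpt-oss-20b": {"input": 0.10, "output": 0.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given token volume."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + (output_tokens / 1_000_000) * p["output"]

# Example workload: 2M input tokens and 500K output tokens on gpt-oss-120b
print(f"${estimate_cost('gpt-oss-120b', 2_000_000, 500_000):.3f}")  # prints $0.675
```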

Global from day one

Groq's global footprint, with data centers across North America, Europe and the Middle East, ensures reliable, high-performance AI inference wherever developers work. With GroqCloud, the OpenAI open models are now available worldwide with minimal latency.

About Groq

Groq is the AI inference platform redefining price-performance. Its custom-built LPU and cloud were designed to run powerful models instantly, reliably and at the lowest cost per token, without compromise. More than 1.9 million developers trust Groq to build fast and scale smarter.

About Humain

www.humain.ai
