OpenAI has launched a more powerful version of its o1 "reasoning" AI model, o1-pro, in its developer API. According to OpenAI, o1-pro uses more compute than o1 to provide "consistently better responses." For now, it's available only to select developers -- those who have spent at least $5 on OpenAI API services -- and it comes at a hefty price.

OpenAI charges $150 per million tokens (roughly 750,000 words) fed into the model and $600 per million tokens generated by it. That's twice the price of OpenAI's GPT-4.5 and 10 times the price of the regular o1.
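To make the price gap concrete, here is a minimal cost-comparison sketch based on the per-token rates quoted above. The regular o1 rates are derived from the "10 times" figure in the article; the request sizes in the example are made-up illustrative numbers, not real usage data.

```python
# Per-token pricing from the article, in USD per 1 million tokens.
O1_PRO_INPUT = 150.00   # o1-pro: $150 per 1M input tokens
O1_PRO_OUTPUT = 600.00  # o1-pro: $600 per 1M output tokens
O1_INPUT = 15.00        # regular o1, assuming one-tenth of o1-pro's rates
O1_OUTPUT = 60.00

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Return the dollar cost of one API request at the given per-million-token rates."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
print(f"o1-pro: ${request_cost(10_000, 2_000, O1_PRO_INPUT, O1_PRO_OUTPUT):.2f}")
print(f"o1:     ${request_cost(10_000, 2_000, O1_INPUT, O1_OUTPUT):.2f}")
```

For this example request, o1-pro would cost $2.70 versus $0.27 on regular o1, matching the tenfold price difference.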

OpenAI is betting that o1-pro's improved performance will convince developers to pay the steep price.

A spokesperson for OpenAI said: "o1-pro in the API is a version of o1 that uses more compute to think harder and provide even better answers to the hardest problems. After getting many requests from our developer community, we're excited to bring it to the API to offer more reliable responses."

However, early impressions of o1-pro haven't been overwhelmingly positive. Users found that the model struggled with Sudoku puzzles and was tripped up by simple optical-illusion jokes.

Additionally, internal benchmarks OpenAI ran late last year showed that o1-pro performed only slightly better than the standard o1 on coding and math problems. Those same benchmarks did, however, find that o1-pro answered those problems more reliably.