Amazon has started training Olympus artificial intelligence – it has twice as many parameters as GPT-4

by alex

Amazon is investing millions in training an ambitious large language model (LLM), with hopes it can compete with OpenAI and Alphabet (Google).

The model, codenamed Olympus, has 2 trillion parameters, which could make it one of the largest models in training, according to people familiar with the matter cited by Reuters. OpenAI's GPT-4 model reportedly has one trillion parameters.

The team is led by Rohit Prasad, a former Alexa executive who now reports directly to CEO Andy Jassy. As Amazon's head scientist for artificial general intelligence (AGI), Prasad has brought in researchers from Alexa AI and Amazon's science team to train the model, unifying AI efforts across the company.

Amazon has already trained smaller models, such as Titan. The company also partners with startups developing artificial intelligence models, such as Anthropic and AI21 Labs, offering their models to Amazon Web Services (AWS) customers.

Amazon believes that having its own models can make AWS's offerings more attractive because enterprise customers want access to the best models. However, there are no specific release dates for the new model yet.

Large language models (LLMs) are the technology behind generative AI tools: they learn from massive amounts of data to produce human-like responses. The larger the model, the more expensive it is to train, given the computing power required. On an earnings call in April, Amazon executives said the company would increase its investment in LLMs and generative AI while cutting costs in its retail business.

