Anthropic taught its chatbot Claude to analyze 150,000 words at a time

by alex

Anthropic has released Claude 2.1. The latest version of the ChatGPT competitor expands the context window to 200,000 tokens, which Anthropic says is roughly 150,000 words or more than 500 pages of material. The company also says version 2.1 halves the frequency of hallucinations, resulting in fewer false responses.

The company says Claude 2.1's 200K-token context window lets users upload entire codebases, scientific papers, financial reports, or long literary works. Once the material is uploaded, the chatbot can summarize it, answer specific questions about its content, compare and contrast multiple documents, or spot patterns that may be difficult for a human to see, Engadget reports.
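For readers who want to try this through the API rather than the claude.ai interface, the snippet below is a minimal sketch of passing a long document to Claude 2.1 with the Anthropic Python SDK. The file name, prompt wording, and max_tokens value are illustrative assumptions, not part of Anthropic's announcement, and an ANTHROPIC_API_KEY environment variable is assumed to be set.

```python
# Minimal sketch: send a long document to Claude 2.1 and ask for a summary.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY environment variable; "report.txt" and the prompt
# are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("report.txt", encoding="utf-8") as f:
    document = f.read()  # up to roughly 200K tokens of material

response = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"{document}\n\nSummarize the key points of the document above.",
        }
    ],
)

print(response.content[0].text)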

Anthropic warns that parsing and responding to very long inputs can take an AI bot several minutes, far longer than the seconds simpler queries require. Hallucinations, or false information presented as fact, are still common in this generation of AI chatbots. However, Anthropic claims Claude 2.1 produces half as many hallucinations as Claude 2.0. The company attributes this progress in part to the model's improved ability to distinguish incorrect statements from admissions of uncertainty, making Claude 2.1 about twice as likely to admit that it doesn't know the answer than to answer incorrectly.

Anthropic claims that Claude 2.1 also makes 30% fewer errors on very long documents and has a three to four times lower rate of "falsely concluding that a document supports a particular claim."
