This is OpenAI’s secret Project Strawberry

OpenAI wants to further expand the reasoning abilities of its AI models – and is relying on a top-secret project called Strawberry to do so. Meanwhile, the company faces criticism for allegedly illegal measures to keep potential whistleblowers from speaking out.

OpenAI is working on a new project codenamed Strawberry, which aims to improve the reasoning ability of AI models. This was reported by Reuters, citing a person familiar with the matter and internal documents. The documents outline the objectives of the Strawberry project, but not how they are to be achieved.

The project is still in development, and more precise details, such as when Strawberry is expected to launch, are not yet known. Even within the company, the project is a “closely guarded secret”.

What Strawberry is supposed to do

According to AI experts, the ability to reason is essential for achieving human-level or even superhuman intelligence. Project Strawberry is intended to enable OpenAI’s AI tools not only to generate answers to prompts based on their training data, but also to search the Internet independently and thus conduct “deep research”. ChatGPT has been able to search the Internet since September 2023, but only on request and with sometimes unsuitable results.

A company spokesperson did not respond to Reuters’ specific questions about Strawberry, but indirectly confirmed that the company wants to equip its own AI models with better logical thinking skills:

We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time.

According to Reuters, Project Strawberry was previously known as Q*. Demos showed the model’s ability to answer complicated scientific and mathematical questions – a goal Strawberry is likely to share. OpenAI also recently presented a project at an internal meeting that is said to have human-like reasoning skills; however, it is unclear whether this is Strawberry.

Is OpenAI taking illegal action against whistleblowers?

While OpenAI is continuously expanding the capabilities of its models, concerns are also growing about the potentially negative and far-reaching consequences of artificial intelligence. It has now emerged that OpenAI allegedly sought to prevent its own employees from warning regulators about the risks of its technology. This is the result of a complaint filed by several whistleblowers with the US Securities and Exchange Commission (SEC), which the Washington Post has obtained.


Through non-disclosure agreements (NDAs) and other measures, employees say they were asked, among other things, to waive compensation they would have been entitled to as whistleblowers and to seek the company’s consent before disclosing information. One of the people affected told the Washington Post:

These contracts sent a message that ‘we don’t want (…) employees talking to federal regulators.’ (…) I don’t think that AI companies can build technology that is safe and in the public interest if they shield themselves from scrutiny and dissent.

According to the complaint, the contracts used by OpenAI violate long-standing federal laws, and the employees are now calling on the SEC to take legal action. OpenAI, however, emphasized that its “whistleblower policies” protect the rights of its employees.


Source: onlinemarketing.de