Meta Releases AI Model that Can Check Other AI Models’ Work


By Hiba Akbar

Meta has released a new batch of AI models, including a “Self-Taught Evaluator” that can be used to check the work of other models, such as ChatGPT, for accuracy.

Key Takeaways

  • Meta released new AI models on Friday.
  • Alongside them, it introduced an evaluation model aimed at reducing human involvement.
  • It aims to solve the problem of outdated and inaccurate information produced by AI models.
  • Researchers expect the model’s benefits to materialize by 2030.
  • Meta has also released an improved version of its image-identification model and tools to improve response times.

On Friday, Meta announced the release of a new batch of AI models. Among them is the “Self-Taught Evaluator,” an innovation that is a step toward reducing human involvement in AI development.

Meta initially described the tool in an August paper, and it has now been released.

According to Meta, the tool addresses one of the most common concerns about AI models such as ChatGPT: they frequently give incorrect or out-of-date answers to queries.

Meta described how the tool uses the “chain of thought” technique to produce reliable assessments of a model’s responses.

This technique breaks a complex problem into smaller logical steps, improving the accuracy of answers in demanding fields such as physics, math, and coding.
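To make the idea concrete, here is a minimal sketch of how an “LLM-as-a-judge” evaluation might be framed. The prompt template, the verdict format, and the helper functions below are illustrative assumptions; Meta’s actual Self-Taught Evaluator prompts are not reproduced here.

```python
# Minimal sketch: prompt a judge model to reason step by step ("chain of
# thought") before giving a verdict, then parse that verdict.
# JUDGE_PROMPT and parse_verdict() are hypothetical, not Meta's actual format.

JUDGE_PROMPT = """You are an impartial evaluator.
Question: {question}
Candidate answer: {answer}

Think step by step: break the problem into small logical parts,
check each part, then give a final verdict on its own line as
"Verdict: CORRECT" or "Verdict: INCORRECT"."""


def build_judge_prompt(question: str, answer: str) -> str:
    """Fill the evaluation template for one question/answer pair."""
    return JUDGE_PROMPT.format(question=question, answer=answer)


def parse_verdict(judge_output: str) -> bool:
    """Extract the final verdict from the judge model's reasoning."""
    for line in reversed(judge_output.strip().splitlines()):
        if line.startswith("Verdict:"):
            return "INCORRECT" not in line
    raise ValueError("No verdict found in judge output")
```

The key design point is that the judge is asked to show its reasoning before the verdict, which is what the chain-of-thought technique adds over a bare yes/no classification.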

The researchers trained the evaluator entirely on AI-generated data, removing the need for human annotation at that stage and making it easier to rely on AI alone.
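The AI-only data pipeline described above can be sketched as follows. This is a toy illustration under stated assumptions: `generate` and `judge` stand in for calls to real models, and Meta’s actual training pipeline is considerably more involved.

```python
# Sketch: build evaluator training data with no human labels, as the article
# describes. For each prompt, sample two candidate answers and let an AI
# judge pick the better one. generate() and judge() are hypothetical
# stand-ins for real model calls.

def make_preference_pairs(prompts, generate, judge):
    """Return (prompt, chosen, rejected) triples labeled by an AI judge.

    judge(prompt, a, b) returns True when answer `a` is preferred over `b`.
    """
    data = []
    for p in prompts:
        a, b = generate(p), generate(p)  # two independent samples
        if judge(p, a, b):
            data.append((p, a, b))
        else:
            data.append((p, b, a))
    return data
```

The resulting preference triples are the kind of data an evaluator model can then be trained on, with the judge rather than a human annotator supplying every label.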

Two Meta researchers working on the project told Reuters that the capacity to utilize AI to properly evaluate AI hints at a possible path toward constructing autonomous AI entities that can learn from their own mistakes.

Self-improving technology, once confined to fiction, is now entering the real world. The goal is for AI to operate as humans do, making decisions on its own without human involvement. If AI can reliably verify the accuracy of answers to complex writing and math queries, the need for reinforcement learning from human feedback may eventually fade. It is expected that by 2030 we’ll be able to benefit from this model.


Meta has also released an improved version of its image-identification model, tools to improve response times, and datasets to aid the discovery of novel inorganic materials.

One of the researchers, Jason Weston, said,

“We hope, as AI becomes more and more super-human, that it will get better and better at checking its work so that it will actually be better than the average human.”

He further added,

“The idea of being self-taught and able to self-evaluate is basically crucial to the idea of getting to this sort of super-human level of AI.”

Other companies, such as Anthropic and Google, have also published research on reinforcement learning from AI feedback (RLAIF). Unlike Meta, however, they typically do not release their models to the general public.


For more AI, cybersecurity, and digital marketing insights, visit Daily Digital Grind.

If you’re interested in contributing, check out our Write for Us page to submit your guest posts!