Meta has introduced a groundbreaking artificial intelligence model called Meta Motivo, aimed at elevating the Metaverse experience by enhancing the realism of digital avatars. This innovation is part of Meta’s ongoing efforts to establish dominance in the AI and augmented reality spaces, with the company allocating between $37 billion and $40 billion in capital expenditures for 2024.
The Meta Motivo model tackles long-standing problems in digital avatar movement, enabling more fluid, human-like motion. This advance could pave the way for more realistic non-playable characters (NPCs), broader access to character animation, and more immersive digital interactions.
Meta is also introducing the Large Concept Model (LCM), a novel approach to language modeling. Unlike traditional large language models (LLMs), which predict the next word, the LCM predicts high-level concepts, represented as full sentences, in an embedding space shared across multiple languages and modalities. Meta describes this as a major step toward decoupling reasoning from language representation.
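The difference from token-level prediction can be sketched with a toy example. Everything below is illustrative assumption, not Meta's implementation: the sentences, the 3-D vectors, and the stand-in "model" are made up, and a real LCM operates in a learned multilingual sentence-embedding space with a trained transformer.

```python
import numpy as np

# Toy illustration of concept-level prediction (NOT Meta's actual LCM):
# each "concept" is a whole sentence represented by an embedding vector,
# and the model predicts the next concept embedding rather than the next
# word token.

# Hypothetical sentence embeddings; stand-ins for a learned embedding space.
sentence_bank = {
    "The sky darkened.":          np.array([1.0, 0.0, 0.0]),
    "Rain began to fall.":        np.array([0.9, 0.1, 0.0]),
    "Everyone opened umbrellas.": np.array([0.8, 0.2, 0.1]),
}

def predict_next_concept(context):
    """Stand-in 'model': linearly extrapolate the concept trajectory.
    A trained LCM would be a transformer over embeddings."""
    return context[-1] + (context[-1] - context[0])

def decode(embedding, bank):
    """Map a predicted concept embedding back to a sentence (nearest neighbor)."""
    return min(bank, key=lambda s: np.linalg.norm(bank[s] - embedding))

context = [sentence_bank["The sky darkened."],
           sentence_bank["Rain began to fall."]]
predicted = predict_next_concept(context)
# The model's output unit is a full sentence, not a word:
print(decode(predicted, sentence_bank))
```

Because prediction happens in the embedding space, the same predicted concept could in principle be decoded into any language the embedding space covers, which is the decoupling of reasoning from surface language that Meta describes.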
Beyond these innovations, Meta continues to release its AI tools for public use. One such tool is Video Seal, which embeds a hidden, traceable watermark in videos to combat unauthorized use. This aligns with Meta’s philosophy that open access to AI models can foster collaboration and drive technological progress.
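The general idea of an imperceptible, recoverable watermark can be sketched as follows. Note the heavy caveat: Video Seal itself uses a learned, compression-robust watermarking method; the naive least-significant-bit scheme below is only a conceptual stand-in to show how a mark can be hidden in pixel data without visibly altering it.

```python
import numpy as np

# Toy least-significant-bit (LSB) watermark -- illustration only.
# Video Seal's real watermark is learned and survives edits/compression;
# naive LSB embedding does not.

def embed_watermark(frame, bits):
    """Hide a bit string in the LSBs of the first len(bits) pixels."""
    flat = frame.flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear LSB, then set it to the message bit
    return flat.reshape(frame.shape)

def extract_watermark(frame, n_bits):
    """Read the hidden bits back out of the pixel LSBs."""
    return [int(v) & 1 for v in frame.flatten()[:n_bits]]

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # tiny stand-in "video frame"
message = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical traceable ID

marked = embed_watermark(frame, message)
print(extract_watermark(marked, len(message)))  # recovers the message
```

Each pixel value changes by at most 1, so the mark is invisible to a viewer, yet anyone who knows where to look can recover the embedded identifier.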
In a statement, Meta emphasized the broader implications of its research:
“We believe this work will lead to the development of fully embodied agents in the Metaverse, enabling lifelike NPCs and new immersive experiences.”
Meta’s substantial investment in AI, coupled with its open-source initiatives, reflects its ambition to shape the future of digital interactions.
For the full report, see the original article at https://www.reuters.com/technology/artificial-intelligence/meta-releases-ai-model-enhance-metaverse-experience-2024-12-13/