New Delhi: Synthetic actors used to be a niche VFX flex. In 2026, they are creeping into everyday entertainment, from ads to music videos to livestreams. The big shift is not just visuals. It is how fast these characters can be made, and how easily they can “perform” without a full studio setup.
We have friends in editing rooms who joke that half the “talent” work now starts on a laptop and a phone camera. The joke lands because the tools have started looking good enough for real work, and the cost barrier keeps dropping for small teams too.
From MetaHumans to avatar actors, the new casting pipeline
Unreal Engine’s MetaHuman ecosystem pushed digital humans closer to mainstream creator workflows. MetaHuman Animator, for example, can capture an actor’s facial performance using an iPhone or a stereo head-mounted camera and apply it to a MetaHuman character, skipping a lot of manual cleanup work.
That matters for casting and production since it changes the starting point. You do not need a full capture stage to begin testing a performance. A small team can block a scene quickly, try different “actors” as avatars, and refine the look later.
This is where synthetic actors start feeling less like a sci-fi headline and more like a production choice.
Performance capture without bulky rigs is becoming normal
Markerless motion capture is one of the biggest drivers here. Move.ai markets tools that can pull motion capture from video, including single-camera captures, without suits and markers. That is a big deal for indie creators and fast-turnaround work.
Then there is the “drop a CG character into live-action footage” side of it. Wonder Dynamics built Wonder Studio around the idea that creators can add and animate CG characters into footage using AI-powered analysis, and Autodesk later acquired the company.
Put these pieces together and you get a new kind of pipeline.
- a phone shoots the face capture
- regular cameras shoot the body motion
- AI tools help map performance onto a digital character
- edits happen in post without calling everyone back to set
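For readers who think in code, here is a rough, purely illustrative sketch of that flow in Python. Every function, file name, and data structure below is made up for the example; none of it is the actual API of MetaHuman Animator, Move.ai, or Wonder Studio, which work through their own apps, plugins, and exporters.

```python
from dataclasses import dataclass

# Purely illustrative containers. Real pipelines exchange far richer data
# (FBX, USD, vendor-specific rigs), not simple lists of dictionaries.

@dataclass
class FacePerformance:
    frames: list  # per-frame facial expression weights (hypothetical)

@dataclass
class BodyMotion:
    frames: list  # per-frame joint poses (hypothetical)

@dataclass
class CharacterTake:
    face: FacePerformance
    body: BodyMotion
    character_id: str


def capture_face_from_phone(video_path: str) -> FacePerformance:
    """Stand-in for the phone-based facial capture step."""
    return FacePerformance(frames=[{"jaw_open": 0.2, "smile": 0.6}])


def capture_body_from_video(video_path: str) -> BodyMotion:
    """Stand-in for markerless body capture pulled from ordinary camera footage."""
    return BodyMotion(frames=[{"hips": (0.0, 0.0, 0.0)}])


def retarget_to_character(face: FacePerformance, body: BodyMotion,
                          character_id: str) -> CharacterTake:
    """Stand-in for the AI-assisted step that maps both performances onto a digital character."""
    return CharacterTake(face=face, body=body, character_id=character_id)


if __name__ == "__main__":
    # The "new pipeline" in miniature: a phone face take plus a regular camera
    # body take, merged onto a chosen avatar, with swaps handled later in post.
    face = capture_face_from_phone("face_take_iphone.mov")
    body = capture_body_from_video("body_take_cam_a.mp4")
    take = retarget_to_character(face, body, character_id="avatar_actor_03")
    print(f"Assembled take for {take.character_id}: "
          f"{len(take.face.frames)} face frames, {len(take.body.frames)} body frames")
```

The point is the shape of the pipeline, not the code: each stage can come from a different tool, and only the final retargeted take needs to land in a proper 3D package for polish.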
For brands and studios, this also helps with localisation, reshoots, and quick alternates. For creators, it is a way to try a lot of ideas fast, even if the final shot still needs polish.
The influencer side is exploding, and it is not subtle anymore
Synthetic influencers are also rising fast, and they work like always-on personalities. They post, reply, “stream”, and build fandom without needing travel, rest, or a human schedule. The creator tools keep getting easier, too. One pitch for the new tools sums up the vibe: “New AI tools have made it easier than ever to create virtual influencers… With just a few steps, anyone can design a character, give it a unique voice and style, and launch it as a full-fledged Instagram personality.”
This is where things get messy for trust. A lot of people still assume the face on screen is a person with a real life behind it. With synthetic influencers, that assumption breaks. Platforms and labels are trying to catch up, and viewers have to stay sharp.
The big questions: rights, consent, and who gets paid
Once a synthetic actor looks human, the hard part becomes human too. Who owns the face? Who owns the voice? Who approved the training data? Who gets paid when that “talent” works 24×7?
The debate around AI performers like Tilly Norwood shows how fast this can turn into an industry fight, with unions and creators questioning what is fair and what is stolen.
Where TV9’s AI² Awards 2026 fits into this story
This is also why initiatives like TV9 Network’s AI² Awards 2026 land at an interesting time. The competition is positioning itself around tech-shaped storytelling, asking students, indie creators, and early-career filmmakers to experiment with AI tools across formats like documentaries, animation, music videos, and branded content.
If synthetic actors and AI influencers are going to be part of the next decade of screens, the real test will be how creators use them without flattening the human side of stories.