Show HN: FaceTime-style calls with an AI Companion (Live2D and long-term memory)
Hi HN, I built Beni (https://thebeni.ai), a web app for real-time video calls with an AI companion.
The idea started from a pretty simple observation: text chatbots are everywhere, but they rarely feel present. I wanted something closer to a call, where the character actually reacts in real time (voice, timing, expressions), not just “type, wait, reply”.
Beni is basically:
A Live2D avatar that animates during the call (expressions + motion driven by the conversation)
Real-time voice conversation (streaming response, not “wait 10 seconds then speak”)
Long-term memory so the character can keep context across sessions
The hardest part wasn’t generating text; it was making the whole loop feel synchronized: mic input, model response, TTS audio, and Live2D animation all need to line up, or it feels broken immediately. I ended up spending more time on state management, latency, and buffering than on prompts.
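To show what I mean by state management, here’s a heavily simplified sketch of the turn-taking loop (names and structure are illustrative, not the actual code):

    // Simplified turn-taking loop: every stage gates the next, so mic input,
    // model output, TTS audio and the avatar never drift out of step.
    type CallState = "idle" | "listening" | "thinking" | "speaking";

    interface CallLoop {
      state: CallState;
      pendingAudio: ArrayBuffer[]; // streamed TTS chunks not yet played
    }

    function onUserStartsTalking(loop: CallLoop) {
      // Barge-in: flush buffered TTS so the avatar stops talking over the user.
      loop.pendingAudio.length = 0;
      loop.state = "listening";
    }

    function onTtsChunk(loop: CallLoop, chunk: ArrayBuffer) {
      // Buffer streamed audio; playback and the Live2D mouth parameter are
      // driven from the same playback clock so they stay aligned.
      loop.pendingAudio.push(chunk);
      loop.state = "speaking";
    }

The real version has more states (interruptions, reconnects), but the principle is the same: one source of truth for what the character is doing at any moment.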
Some implementation details (happy to share more if anyone’s curious):
Browser-based real-time calling, with audio streaming and client-side playback control
Live2D rendering on the front end, with animation hooks tied to speech / state
A memory layer that stores lightweight user facts/preferences and conversation summaries to keep continuity
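For the memory layer specifically, the shape is roughly this (hypothetical field names, heavily simplified):

    // Hypothetical shape of the memory layer: small structured facts plus
    // per-session summaries, assembled into a compact context block at call start.
    interface UserFact {
      key: string;       // e.g. "favorite_music"
      value: string;     // e.g. "lo-fi while working"
      updatedAt: number; // epoch ms, so newer facts win
    }

    interface SessionSummary {
      sessionId: string;
      summary: string;   // a few sentences written at the end of each call
      createdAt: number;
    }

    function buildContext(facts: UserFact[], summaries: SessionSummary[], maxSummaries = 3): string {
      const recent = [...summaries]
        .sort((a, b) => b.createdAt - a.createdAt)
        .slice(0, maxSummaries);
      return [
        "Known about the user:",
        ...facts.map(f => `- ${f.key}: ${f.value}`),
        "Recent conversations:",
        ...recent.map(s => `- ${s.summary}`),
      ].join("\n");
    }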
Current limitation: sign-in is required today (to persist memory and prevent abuse). I’m adding a guest mode soon so it’s faster to try out, and I’m working on a mobile view now.
What I’d love feedback on:
Does the “real-time call” loop feel responsive enough, or still too laggy?
Any ideas for better lip sync / expression timing on 2D/3D avatars in the browser?
Thanks, and I’ll be around in the comments.
Building on zemo's point about parasocial relationships: traditional parasocial interaction involves a performer who doesn't know you exist. Here the AI does respond to you specifically, which changes the dynamic.
Is it still parasocial if the other party is responsive but not conscious? Or is this something new that we don't have good language for yet?
I think “parasocial” still captures part of it (one-to-many distribution, performer vibe), but there’s also a true interactive dyad here. It’s closer to “synthetic social interaction” or “responsive parasocial.” I don’t have a perfect word yet, but the asymmetry and the responsiveness both matter.
You need to first prove that AI is not conscious.
I find it hard to even convince others that I am a conscious person.
Maybe consciousness is just a matter of belief, if I see this AI and believe that it's a person, then I am talking to a conscious entity.
I’m not trying to make any claims about consciousness. For us, the practical question is: does the interaction feel supportive and useful, while staying transparent that it’s a model. The rest is philosophy, and I’m happy to read more perspectives.
Give it access to a terminal and see what it does, unprompted. Does it explore? Does it develop interests? Does it change when exposed to new information?
I think maybe there needs to be a new word. It's still an asymmetric relationship. It's kind of a mix of DMing an influencer and chatting with the barista because you think she actually likes you. You're talking to a mirage.
For better lip sync you could try using Rhubarb to extract mouth shapes from the mp3. What are you using for backend speech processing to get the real-time streaming response? Rhubarb would add a bit of latency for sure.
For real-time: we use WebRTC for streaming. Input is streaming STT, then a low-latency LLM, then TTS, and we drive Live2D parameters on the client. Lip sync is currently simple (phoneme / amplitude-based), and we’re testing viseme extraction. Rhubarb is on our list, but we’re cautious about the added latency.
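To make the amplitude-based part concrete, here’s a rough sketch of the idea using the Web Audio API (setMouthOpen stands in for however your Live2D runtime sets the mouth parameter, e.g. ParamMouthOpenY on Cubism models; the constants are tuned by eye, not gospel):

    // Amplitude-based lip sync: sample the TTS output with an AnalyserNode,
    // compute RMS loudness, smooth it, and map it onto the mouth-open parameter.
    function driveMouth(
      ctx: AudioContext,
      ttsNode: AudioNode,                    // node playing the streamed TTS audio
      setMouthOpen: (value: number) => void, // stand-in for your Live2D binding
    ) {
      const analyser = ctx.createAnalyser();
      analyser.fftSize = 256;
      ttsNode.connect(analyser);

      const buf = new Uint8Array(analyser.fftSize);
      let smoothed = 0;

      const tick = () => {
        analyser.getByteTimeDomainData(buf);
        let sum = 0;
        for (let i = 0; i < buf.length; i++) {
          const v = (buf[i] - 128) / 128; // center around 0, roughly -1..1
          sum += v * v;
        }
        const rms = Math.sqrt(sum / buf.length);
        // Exponential smoothing so the mouth doesn't flicker frame to frame.
        smoothed = smoothed * 0.7 + Math.min(1, rms * 4) * 0.3;
        setMouthOpen(smoothed);
        requestAnimationFrame(tick);
      };
      tick();
    }

Viseme-based timing is clearly better for consonants; the open question for us is whether it’s worth the extra latency in a streaming setting.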
This is disturbing.
It will quickly distill down to clients using the service just for sex and sex-adjacent activities.
No kink-shaming, but this sort of thing enables self-destructive hard-to-return-from anti-social behaviour.
Totally fair reaction. We’re building this with clear boundaries: we don’t position it as a therapy replacement, we add safety rails, and we give users a choice of mode, with guardrails that differ based on that choice. There’s also an age restriction as a safety boundary.
wow we got personal vtubers now!
yess you can have her 24/7!
What are you using for tts/stt/models?
Realtime API + ElevenLabs for now, but the LLMs will be diversified per persona going forward. We’re using ChatGPT/Gemini as baseline models; we feel prompting alone has its limits.
This is cool.
Appreciate it. If you try it and anything feels off (latency, turn-taking, uncanny moments), I’d love concrete feedback. That’s what we’re grinding on right now.
Where’s the asteroid at
Same place as my latency budget: disappearing fast.
It creates a conflict to build a system that is both a private friend and a public performer. You cannot maximize intimacy and fame at the same time.
100% agree. Maximizing intimacy and scaling distribution pull in opposite directions. We’re experimenting with keeping the “character” consistent while letting personalization live in private memory and user-controlled settings. Still early, and this tension is real.
You're describing Parasocial interaction: https://en.wikipedia.org/wiki/Parasocial_interaction
Far from being impossible, it's the entire influencer economy. This form of social media has been extremely widespread for a decade or so running; it's probably the dominant form of social media.