Comment by raphman

7 days ago

Nice user experience (though I'd hate waiting so long for the first response).

One of my major fears about always-available AI agents is that they will (and already do) infiltrate and disrupt communication in families. In a dystopian timeline, kids won't learn life skills and attitudes from their parents but from the AI nanny made by an American megacorp. Why should kids trust their parents on anything if the AI nanny provides faster and better-researched answers? No need to ask Dad for advice anymore. No need to ask Mom why your friend with dark skin hasn't come to school lately - the AI nanny will provide a step-by-step learning experience for you.

That's definitely not the only possible outcome, and each family will have a different experience. Not everyone has parents who give good advice. However, I can't imagine such a tool not having widespread effects on intra-family communication and relationships.

Is this a concern to you? If so, how would one mitigate the negative effects? Is it the responsibility of software developers to prevent misuse of such systems?

It is a concern I have, but it's a concern with every new technology. The user, or in this case the family, is the one that needs to decide how to wield the technology.

The way I think about this tool is that it is a way to spark and test our curiosity. Is the child really interested in space, or just randomly asking questions? Tools like this can help you as a parent determine that faster.

For family tools like these, I would want parents to know everything the child is asking about.