
A Healthier Posture for Parenting in the Age of AI: Supervised Exposure, Values, Boundaries, and Practice

Executive summary

Parents of children ages 2–12 are being told to “teach your child how to think” to prepare for AI. Cognitive science suggests a better framing: much of what we experience as “thinking” behaves like perception—the brain continuously generates and updates internal representations, and we experience those representations as perceptions, not as hand-manufactured products. Predictive-processing perspectives treat brains as “prediction machines” that minimize error between expectations and incoming signals, blurring the classic boundary between perception and cognition. [1]

In early childhood, the central developmental work is not “learning to think,” but learning to translate internal states into language, action, and socially shared meaning. Foundational work on tutoring and “scaffolding” shows that children learn best when a more capable guide supports a task just beyond the child’s current ability, then gradually removes support as competence grows. [2]

Generative AI is extraordinary at translation-like work: it detects patterns in language and produces likely continuations (predictions) that can be shaped into outputs such as explanations, stories, questions, or step-by-step plans. Modern large language models (LLMs) build on Transformer architectures introduced in machine translation research (“Attention Is All You Need”) and are trained with objectives closely related to next-token prediction. [3]

That capability can be healthy for children only if it is embedded in the same posture good parenting has always required for powerful tools: supervised exposure, values, boundaries, and practice—plus explicit verification habits, because generative systems can produce confident inaccuracies (“confabulations”) and age-inappropriate content. [4]

Two age-linked shifts organize this report’s recommendations:
– Ages 2–7: collaborative tutor phase. Children are rapidly developing symbolic representation, language, theory-of-mind precursors, and social learning. The parenting opportunity is co-use: parents and children watch how AI translates, question that translation, and attempt their own translations. This aligns with scaffolding and the zone of proximal development in developmental theory, and with evidence that joint attention and guided interaction are catalysts for early language development. [5]
– Around age 8 and beyond: attention-allocation phase. As executive function and metacognitive monitoring strengthen in middle childhood, children become more capable of deciding what deserves attention and when to rely on assistance. The parenting task shifts toward building judgment: what to learn deeply, what to delegate, and how to verify. [6]

A practical implication that many families miss: most general-purpose AI chatbots impose minimum-age requirements (often 13+ or 18+), so for much of ages 2–12 the safest pattern is parent-mediated use, education-specific systems with child guardrails, or supervised accounts where explicitly supported. [7]

Cognitive science on thinking as perception rather than manufacture

The claim “thinking is perception” is not a settled scientific consensus; it is best treated as a useful lens that fits several influential lines of cognitive science. Predictive-processing accounts argue that perception is not passive reception; it is an active inferential process in which the brain continuously generates predictions and updates them using prediction errors. In these models, cognition and perception become parts of the same hierarchy of generative modeling, differing more by level and content than by kind. [8]
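The error-correction idea at the heart of predictive processing can be sketched in a few lines. This toy loop (the signal value and learning rate are illustrative assumptions of this sketch, not taken from the cited literature) shows an internal estimate being pulled toward an incoming signal by repeated prediction-error updates:

```python
# Toy illustration of a predictive-processing update: an internal
# estimate is repeatedly corrected by a fraction of the prediction error.
# The numbers here are invented for illustration.

def update_belief(belief: float, observation: float, learning_rate: float = 0.3) -> float:
    """Move the internal prediction toward the sensed signal by a
    fraction of the prediction error (observation - belief)."""
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

belief = 0.0        # initial internal model: "I expect 0"
signal = 10.0       # stable external signal
for _ in range(20):
    belief = update_belief(belief, signal)

print(round(belief, 2))   # the estimate has converged close to 10.0
```

The point of the sketch is only the shape of the computation: perception-like inference is a loop of predict, compare, and correct, not a one-shot readout of the world.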

This lens becomes especially intuitive when you look at phenomena where “thought” resembles “seeing”:
– Mental imagery overlaps with perception. Neuroimaging studies show that visual mental imagery and visual perception recruit much of the same neural machinery; imagery is typically weaker, but it is not a wholly separate system. This supports the idea that some “thinking” is closer to perceiving internally generated representations than manufacturing discrete objects called “thoughts.” [9]
– Dreaming, hallucination, and illusion as generative perception. Predictive frameworks explicitly use hallucinations and related phenomena to motivate the view that the brain can generate perception-like content in the absence of external stimuli. [10]
– Interoception as an internal sense. A related stream of work frames emotions and aspects of selfhood as inference on internal bodily signals (“interoceptive inference”), again treating experience as perception-like inference rather than conscious manufacture. [11]

A parent-facing translation of this research claim is modest but powerful: children (and adults) typically do not experience thoughts as handcrafted products; they experience a stream of internal content they must interpret, label, and translate into speech and action. What varies across people—and what can be developed—is the skill of translation and verification, not the biological capacity to have thoughts. [12]

Children’s development as translation, then attention allocation

The “collaborative tutor → attention allocation” developmental story maps cleanly onto widely used stage descriptions and to research on scaffolding, executive function, and metacognition.

For ages 2–7, many developmental frameworks describe a period in which children rapidly expand symbolic thought and language while still being limited in logical operations and perspective-taking. In Piagetian terms this aligns with the preoperational stage (commonly described as ~2–7), characterized by growing representational ability and pretend play. [13]

What matters for AI parenting is how children learn during this period:
– Learning is highly social and scaffolded. Classic tutoring research by David Wood, Jerome S. Bruner, and Gail Ross analyzed how tutors enable children to solve problems beyond their independent capacity—through recruiting interest, reducing degrees of freedom, maintaining direction, marking critical features, controlling frustration, and demonstrating solutions. This is the origin of “scaffolding” as a learning principle. [2]
– Joint attention predicts language growth. Studies of toddlers show that joint attention and joint engagement during caregiver–child interaction relate to later expressive language development, underscoring that early “translation ability” is built through shared focus and guided interaction. [17]
– Theory of mind and perspective-taking accelerate in preschool years. Meta-analytic work on false-belief tasks shows major developmental change from roughly ages 3 to 5, reflecting an expanding capacity to model other minds—crucial for translation into socially appropriate language. [18]

Under this evidence base, the most plausible claim is not “you must teach your child to think,” but: children naturally think; they learn to translate what they experience through guided social interaction.

For ages 7–12, two changes matter:
– Executive function (EF) becomes a central limiter and lever. EF is the set of cognitive control abilities that support goal maintenance, inhibition, working memory, and shifting—skills that enable “staying focused,” “taking time to think before acting,” and navigating novel challenges. EF development continues across childhood and is strongly related to self-regulation and attention control. [19]
– Metacognitive monitoring and control strengthen in middle childhood. Research on metacognition in middle childhood (including ages ~8 and ~10) shows that monitoring and control processes become more measurable and generalizable across tasks, supporting better “attention allocation” and better self-checking (“Do I really know this?”). [20]

This supports the report’s age-linked claim: around 8 (not as a hard boundary, but as a useful heuristic), parenting can shift from primarily co-translating to additionally training attention allocation—helping a child decide what is worth deep personal mastery versus what can be assisted, and building habits of verification. [21]

How generative AI works as a pattern-perceiving translator

Modern generative AI systems are best understood as probabilistic pattern models trained on large corpora to produce likely outputs given an input context.

The core technical origin story matters because it explains why these systems feel like “translators.” The Transformer architecture introduced in “Attention Is All You Need” was designed for sequence-to-sequence tasks and demonstrated state-of-the-art performance on machine translation benchmarks, using self-attention mechanisms rather than recurrence or convolution. [22]
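The self-attention operation that paper introduces is compact enough to show directly. The sketch below (the dimensions and random inputs are illustrative assumptions, not the paper’s experiments) computes scaled dot-product attention, softmax(QKᵀ/√d_k)V, in NumPy:

```python
# Minimal sketch of scaled dot-product attention from the Transformer.
# Shapes and inputs are illustrative; real models use learned projections
# and many attention heads stacked in layers.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of the value rows, with weights
    given by softmax(Q K^T / sqrt(d_k))."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ V                                 # mix the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 tokens, each a 4-dimensional vector
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # one mixed representation per token: (3, 4)
```

The design choice worth noticing: every token’s output depends on every other token at once, which is what lets these models treat “translation” as a whole-sequence mapping rather than word-by-word substitution.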

From that foundation, LLMs such as GPT-style models operationalize language generation as an autoregressive prediction task:
– Early GPT work describes “generative pre-training” as learning from unlabeled text to build broad language competence that can transfer to many tasks. [23]
– GPT-3 (“Language Models are Few-Shot Learners”) demonstrates that scaling autoregressive language models improves few-shot performance across tasks—again consistent with the idea that next-token prediction training can yield broadly useful pattern competence. [24]
– Educational documentation from Google describes language models as estimating token probabilities in context, reinforcing the “prediction engine” view in accessible terms. [26]

This is why “translation” is a helpful metaphor for parents: with the right prompt, the system can translate from an intention or rough idea into a structured outcome—story, explanation, plan, dialogue—because it has learned an enormous map of how such outputs are patterned in language. [27]
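The “prediction engine” view is easy to demonstrate at toy scale. The bigram model below (its corpus and tokens are invented for illustration; real LLMs learn neural representations over vast corpora) estimates next-token probabilities from observed continuations:

```python
# Toy "prediction engine": a bigram model that, like an LLM in miniature,
# scores candidate next tokens by how often they followed the current one
# in its training text. Corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1          # tally observed continuations

def next_token_probs(token):
    """Return {candidate: probability} for the token that follows `token`."""
    total = sum(counts[token].values())
    return {t: c / total for t, c in counts[token].items()}

print(next_token_probs("the"))   # 'cat' is twice as likely as 'mat'
```

An LLM differs in scale and mechanism, not in kind of output: both produce a probability distribution over what plausibly comes next, which is why fluent output and factual accuracy are separate properties.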

Two caveats are essential for rigor and safety:
– LLMs can confabulate. The National Institute of Standards and Technology Generative AI Profile explicitly treats confabulation as a key risk category alongside privacy, information integrity, and unsafe recommendations. [29]
– LLMs are not truth engines. Even when AI assistants cite sources, real-world evaluations show significant error rates in news and factual contexts, and public-service media organizations have documented large fractions of problematic answers across systems. This is precisely why children need verification habits, not just access. [30]

A simple mental model of “AI as translator” (developmentally usable)

Use this as a parent–child loop: the child perceives something, AI offers translations, and you help your child evaluate and refine.

  1. Child experience / question
  2. Rough words (messy translation)
  3. AI proposes versions (multiple translations)
  4. Child & parent compare versions
  5. Child creates their own version
  6. Reality check (books, experiment, trusted adult, sources)
  7. Keep / revise / discard

Pedagogical model for supervised exposure, values, boundaries, and practice

This section turns the evidence into a parenting model.

A strong starting point is that pediatric and child-development guidance increasingly emphasizes context, co-use, and communication—not simplistic hour limits. The American Academy of Pediatrics stresses that the evidence base for media effects is largely observational and that quality, context, and relational factors matter; it also promotes age-based guidance frameworks like the “5 Cs” (child, content, calm, crowding out, communication). [32]

Similarly, research on parental mediation finds families use multiple strategies (active co-use, rules, technical restrictions), and meta-analytic work suggests both active and restrictive mediation relate to reduced time spent on media, with ongoing debate about which strategies reduce which risks. This points toward a blended model—values and communication plus boundaries and tooling—rather than bans alone. [33]

That blended model maps cleanly onto four levers:

Supervised exposure
For ages 2–12, “supervised” should usually mean: the parent is present or the tool is explicitly education-scoped. This aligns with scaffolding: support is highest when the child is least able to self-regulate; support is gradually reduced as capability grows. [34]

Values
Values are the “north star” that prevents AI from becoming either a forbidden idol or a silent babysitter. International policy guidance emphasizes child-centered design and rights-based approaches: UNICEF’s AI guidance is explicitly framed around children’s rights, safety, transparency, and accountability, updated in 2025 in response to generative AI. [36]
For family translation, “values” can be taught as recurring questions: Is it true? Is it kind? Is it mine to share? Does it help me grow?

Boundaries
Boundaries translate values into defaults: when, where, and for what AI is used. UNESCO’s guidance on generative AI in education calls for governance, privacy protections, and age considerations, reinforcing the legitimacy of boundaries as part of responsible use. [37]
Boundaries also reflect platform realities: many tools are not designed for young children and explicitly restrict accounts by age. [38]

Practice
Practice is the bridge from “AI helped” to “child learned.” Cognitive science shows that durable learning is not produced by passive exposure; it is produced by retrieval, feedback, and effortful generation. Retrieval practice (“testing effect”), desirable difficulties, and self-explanation all improve long-term learning and transfer. [39]
In AI terms: the child should generate, not just consume; AI should be used to create better practice loops, not to remove the loop.
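One way to keep that loop honest is to make its bookkeeping explicit. This small sketch (the record fields and labels are assumptions of this illustration, not a prescribed tool) encodes the generate-first rule: the child’s answer is logged before the AI’s version, and anything not independently checked stays “unconfirmed”:

```python
# Sketch of a "generate-first" practice record. Field names and labels
# are illustrative assumptions mirroring the loop described above:
# child generates first, AI version is compared, claims are verified.

def practice_round(question, child_answer, ai_answer, verified: bool):
    """Log one practice round; unchecked claims are labeled
    'unconfirmed' rather than silently accepted."""
    return {
        "question": question,
        "child_first": child_answer,   # retrieval happens before AI help
        "ai_version": ai_answer,
        "status": "verified" if verified else "unconfirmed",
    }

round1 = practice_round(
    "Why is the sky blue?",
    "Because air spreads blue light more",
    "Shorter (blue) wavelengths scatter more in the atmosphere",
    verified=True,
)
print(round1["status"])   # verified
```

Even on paper rather than in code, the same structure works: four columns (question, my answer, AI’s answer, verified or unconfirmed) keep the child generating rather than only consuming.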

Practical playbook for ages 2–12

Age-based activities, prompts, and recommended rhythms

The table below treats AI as a translation tutor whose job is to show versions, not to replace thinking or learning.

| Age band | Developmental target | Healthy AI role | Sample parent prompts (copy/paste) | Session length & frequency |
| --- | --- | --- | --- | --- |
| 2–4 | Language explosion, joint attention, emotion labeling [40] | Parent-operated “translator” for naming and storytelling | “Give me 3 simple ways to say what my child might mean when they say: ‘___’.” [26] “Turn this feeling into a short story for a 3-year-old.” [23] | 5–10 minutes, 2–3×/week; always co-use [41] |
| 5–7 | Symbolic play, early perspective-taking, scaffolding readiness [42] | Collaborative tutor: compare translations; child imitates | “Explain ___ using a picture-in-words, then ask my child 2 questions.” [24] “Show 3 ways to answer this question: ___ (short answer, story, steps).” [23] | 10–15 minutes, 2×/week; parent present or education-scoped tool [43] |
| 8–10 | EF growth; metacognitive monitoring emerging [44] | “Coach + verifier”: child chooses what to delegate, and checks | “Don’t answer yet—ask me 5 questions to clarify what I’m trying to do.” [26] “Give 3 possible answers and tell me what evidence would decide between them.” [29] | 15–25 minutes, 2–4×/week; introduce verification routine [45] |
| 11–12 | Stronger monitoring/control; early digital literacy [46] | “Apprentice reasoning partner”: debate, source-check, write drafts | “Cite 3 primary sources and 2 counterarguments.” [47] “Rewrite this in my voice, then list what might be wrong.” [29] | 25–40 minutes, 2–3×/week; child leads, parent audits randomly [48] |

A weekly routine that builds translation skill rather than dependence

This routine intentionally uses principles from retrieval practice and self-explanation: the child must produce something first, then compare to AI, then explain the differences. [49]

Weekly “Translation Studio” (30–60 minutes total across the week)
Session A: Generate first (10–20 min)
The child answers a question, writes a paragraph, draws a diagram, or explains a concept without AI. This forces retrieval and reveals gaps. [50]

Session B: Compare translations (10–20 min)
Ask AI for 2–3 different translations (story, steps, analogy). The child chooses which is closest to what they meant and identifies what is better or worse. (This is self-explanation in kid form.) [51]

Session C: Reality check (10–20 min)
Pick one claim and verify it using a book, trusted website, experiment, or a knowledgeable adult. This is crucial because confabulation is a known generative risk. [52]

Sample parent scripts

Introducing AI to a young child (ages 4–7)
“AI is like a super translator. It can show us different ways to say things. But it can be wrong, so we always check important stuff. You and I decide what to believe.” [53]

When the AI is confidently wrong (ages 6–12)
“Interesting—this sounds certain, but certainty isn’t proof. Let’s find one trustworthy source that agrees or disagrees. If we can’t verify it, we label it ‘unconfirmed.’” [52]

Privacy script (all ages)
“We don’t share private info with tools. No full names, addresses, school name, passwords, or photos. If we wouldn’t post it on a billboard, we don’t type it into a chatbot.” This aligns with mainstream children’s privacy obligations and data-minimization norms in policy guidance. [54]

Two plug-and-play lesson plans

Lesson plan: “Three translations of one thought” (ages 5–9)
Goal: build awareness that expression is a translation choice, not a single “right way.” [55]
Steps: child says a messy idea → AI produces three styles → child picks best → child improves their own version → parent asks “What changed?” [56]

Lesson plan: “Ask-first tutoring” (ages 8–12)
Goal: train attention allocation and problem-framing. [57]
Prompt: “Don’t answer. Ask me 7 clarifying questions.” Then: “Now answer using steps + one checkable source.” [58]

Twelve-month rollout timeline

The timeline assumes a start in March 2026 and is designed to be repeated annually with higher standards as the child matures. It also assumes the parent is building controls and norms alongside the child, consistent with pediatric guidance emphasizing communication, context, and boundary-setting. [59]

12-Month Parenting Posture Rollout for AI (Mar 2026–Feb 2027)

– Mar 2026: Baseline & rules. Define family values, forbidden zones, and privacy rules; choose tools and set device controls.
– Apr 2026: Co-use habit. Two short co-use sessions per week; start the “generate-first” ritual for ages 8–12.
– May 2026: Translation studio. Add a weekly compare-translations session; start a family “unconfirmed” label.
– Jun 2026: Verification month. Add one verification task per week; teach “source, evidence, counterexample.”
– Jul 2026: Creative month. Stories, jokes, and roleplay with boundaries; emphasize “AI shows versions.”
– Aug 2026: Offline balance. Audit what AI crowds out; rebalance sleep, play, and reading.
– Sep 2026: School alignment. Align with school AI rules; create a homework AI protocol.
– Oct 2026: Attention allocation. The child chooses tasks to delegate versus master; the parent reviews choices.
– Nov 2026: Metacognition. Add self-check prompts (“How sure am I?”); practice noticing overconfidence.
– Dec 2026: Safety tune-up. Review logs, flagged content, and new features; update restrictions.
– Jan 2027: Skill deepening. Retrieval-practice routines; longer projects (book report, science question).
– Feb 2027: Reflection & reset. What helped? What harmed? Revise rules for next year.

Risks, safeguards, policy guidance, and tool comparisons

Risk categories parents must plan for

Inaccuracy and confabulation
Generative AI can produce plausible but incorrect answers; this is recognized in risk frameworks (confabulation, information integrity) and documented in evaluations of AI assistants answering factual questions. [60]

Inappropriate content
Even with guardrails, systems can generate content not suitable for children. Providers explicitly warn that outputs may be inappropriate for all ages. [61]

Overtrust, anthropomorphism, and attachment
Children readily anthropomorphize “intelligent” technologies (notably voice assistants), and real-world incidents and litigation around companion-style chatbots have intensified regulatory attention to minors’ safety. [62]

Privacy and data protection
Children’s data is regulated differently in many jurisdictions (e.g., COPPA in the US; children’s consent provisions in GDPR/UK GDPR frameworks). Even when parents want strong age assurance, regulators and privacy advocates note tensions between protecting children and collecting sensitive data for verification. [63]

Policy and ethical guardrails that translate well to home settings

Tool comparison: AI systems and what they imply for ages 2–12

This table emphasizes age eligibility, supervision features, and settings that support the “healthy posture.” “Recommended setting” is framed as a default family stance, not an endorsement.

| Tool category | Example systems | Stated minimum age / eligibility | Child-safety features that matter most | Recommended use for ages 2–12 |
| --- | --- | --- | --- | --- |
| General-purpose chatbot | ChatGPT (OpenAI) | Not intended for under 13; parental consent required for 13–18 [69] | Data controls (training opt-out), memory controls, temporary chats; teen-safety workstreams exist but are teen-focused [70] | Parent-mediated only; disable memory; use temporary chat for child-related sessions; never treat as therapist or babysitter [71] |
| Supervised-account chatbot | Gemini Apps | Under 13 can use only if a parent enables it on a supervised account [72] | Supervised access toggles; parental notifications; control via family tools [73] | Viable for 8–12 with strong boundaries; still co-use for sensitive topics; verification routines mandatory [74] |
| OS-integrated assistant | Copilot | Generally 18+, expanded to 13–18 in many regions; parental controls available [75] | Can be blocked/limited via family safety tools [76] | Treat as an “adult tool” unless tightly restricted; prefer blocking on child devices when not needed [77] |
| Search + answer engine | Perplexity | States it is for 13+ and references COPPA compliance [78] | Stronger citation patterns can help verification, but age limits remain [79] | Parent-mediated only, used mainly for “show sources” exercises and verification practice [58] |
| Closed/education-specific tutor | Khanmigo / education-scoped platforms | Terms and child protections vary by implementation; some features include adult-linked monitoring and flagged-content alerts [80] | Guardrails, monitoring, adult notifications on flagged content; explicit responsible-use guidance for under 18 [81] | Strong option for 7–12 when available; still teach “AI can be wrong” and require generate-first + verify [82] |
| Government-built school chatbot | NSWEduChat (NSW Department of Education) | Built for Years 5–12 in NSW public schools; trial and rollout described by the department [84] | Multiple “layers of security and optimisation”; education-scoped design; parent-facing information pages [85] | Model for what parents should want: scoped domain, institutional safeguards, transparent rules; emulate at home via boundaries + tooling [86] |
| Companion-character chatbot | Character.AI | Under-18 open-ended chat removed/limited; company describes under-18 restrictions [87] | High attachment risk; active regulatory interest and litigation around minors’ experiences [88] | Not recommended for ages 2–12; treat as a “no” category in family boundaries [89] |
| Adult-only chatbot | Claude (Anthropic) | Requires users to be at least 18 to create/use an account [91] | Adult-only stance reduces direct child exposure but does not eliminate parent-mediated use risks [92] | If used at all, parent-only and strictly as a behind-the-scenes drafting tool [53] |

Tool comparison: parental controls and recommended baseline settings

Because “AI safety” is partly a device and account-management problem, families typically need layered controls.

| Control layer | What it does | Recommended baseline for ages 2–12 | Sources |
| --- | --- | --- | --- |
| Apple Screen Time + content restrictions | App limits, downtime, web content restrictions; ability to restrict “Intelligence & Siri” features | Enable content & privacy restrictions; restrict web content; limit AI-writing/image features on child devices unless intentionally used with supervision | [93] [94] |
| Google Family Link + supervised accounts | App approvals, screen time, SafeSearch locked, service-level controls | Supervise the child account; keep SafeSearch on and locked; approve AI apps explicitly; consider enabling Gemini only when ready for deliberate co-use | [95] [96] |
| Microsoft Family Safety | Screen time limits, content filtering; can block Copilot access | Default-block Copilot on child accounts/devices unless needed; time-limit and content-filter broadly for web browsing | [76] [97] |
| YouTube Kids “approved content only” mode | Restricts the child to parent-selected videos and disables search | Use “approved content only” for young children; treat open search as a later privilege | [98] |
| Search discipline (family rule, not just a setting) | Teaches verification and reduces the “AI as authority” dynamic | Teach: “AI answers are drafts; sources decide.” Use civic online reasoning materials for older kids | [47] [99] |

Case studies and what they imply for parents

Human tutor + AI co-pilot improves outcomes (without replacing the tutor). A randomized controlled trial of “Tutor CoPilot” (a human–language model system) reported improved student mastery rates and shifts toward higher-quality tutoring strategies, while also noting issues such as suggestions not being grade-level appropriate. The key parent-relevant insight: AI is most helpful when it augments a responsible human who remains accountable for judgment and appropriateness. [100]

A school system that tries to “design in” boundaries. NSWEduChat provides a concrete example of a curriculum-scoped, institutionally managed chatbot with layered security/optimization and parent-facing guidance. Whether or not families use this specific system, it demonstrates what “supervised exposure + boundaries + practice” looks like at scale: restricted domain, explicit rules, and design choices that push students to work rather than outsource. [101]

A nonprofit’s attempt to build guardrails and monitoring into an AI tutor. Khan Academy has published a responsible AI framework and describes guardrails/monitoring approaches for its AI features, including adult notifications when content is flagged. For parents, the practical inference is that education-scoped tools with auditability and adult-linked oversight are generally better aligned with child development than open-ended, entertainment-first chat systems. [103]

In all three cases, the same pattern holds: the “healthiest posture” is not bans or blind adoption. It is supervised exposure, values, boundaries, and deliberate practice—using AI as a translation tutor while the parent remains the governor of attention, safety, and meaning. [104]

[1] [8] [10] https://pubmed.ncbi.nlm.nih.gov/23663408/

[2] [12] [34] [43] [55] https://pubmed.ncbi.nlm.nih.gov/932126/

[3] [22] arXiv:1706.03762v7 [cs.CL] 2 Aug 2023

[4] [29] [52] [53] [58] [60] [67] [74] [82] https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf

[5] [13] [42] https://www.ncbi.nlm.nih.gov/books/NBK537095/

[6] [19] Executive functions – PubMed – NIH

[7] [38] [61] [69] https://help.openai.com/en/articles/8313401-is-chatgpt-safe-for-all-ages

[9] Brain areas underlying visual mental imagery and … – PubMed

[11] https://www.sciencedirect.com/science/article/pii/S1364661313002118

[14] [23] [27] https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf

[15] [95] [96] [102] https://support.google.com/families/answer/7101025?hl=en

[16] [97] https://support.microsoft.com/en-us/account-billing/set-up-microsoft-family-safety-b6280c9d-38d7-82ff-0e4f-a6cb7e659344

[17] [40] https://pmc.ncbi.nlm.nih.gov/articles/PMC5891390/

[18] https://pubmed.ncbi.nlm.nih.gov/11405571/

[20] [46] [57] https://www.sciencedirect.com/science/article/pii/S0022096523002333

[21] [44] Executive functions

[24] [83] https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf

[25] [28] [32] [48] [59] https://publications.aap.org/pediatrics/article/157/2/e2025075320/206129/Digital-Ecosystems-Children-and-Adolescents-Policy

[26] [35] [65] https://developers.google.com/machine-learning/crash-course/llm

[30] https://www.reuters.com/business/media-telecom/ai-assistants-make-widespread-errors-about-news-new-research-shows-2025-10-21/

[31] [75] [76] [77] https://support.microsoft.com/en-us/topic/microsoft-copilot-age-limits-and-parental-controls-f79b47a6-288a-4513-8c01-afe4d16db900

[33] https://www.dhi.ac.uk/san/waysofbeing/data/communication-zangana-livingstone-2008b.pdf

[36] https://www.unicef.org/innocenti/media/11991/file/UNICEF-Innocenti-Guidance-on-AI-and-Children-3-2025.pdf

[37] [68] https://unesdoc.unesco.org/ark%3A/48223/pf0000386693

[39] [49] [50] https://psychnet.wustl.edu/memory/wp-content/uploads/2018/04/Roediger-Karpicke-2006_PPS.pdf

[41] [104] https://www.healthychildren.org/English/family-life/Media/Pages/kids-and-screen-time-how-to-use-the-5-cs-of-media-guidance.aspx

[45] https://www.oecd.org/en/topics/sub-issues/children-in-the-digital-environment.html

[47] [99] https://cor.inquirygroup.org/

[51] [56] https://education.asu.edu/sites/g/files/litvpz656/files/lcl/wylie_chi_selfexplanation_0.pdf

[54] [63] https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa

[62] https://pmc.ncbi.nlm.nih.gov/articles/PMC9334403/

[64] [93] [94] https://support.apple.com/en-us/105121

[66] https://www.oecd.org/content/dam/oecd/en/publications/reports/2022/05/companion-document-to-the-oecd-recommendation-on-children-in-the-digital-environment_fc4a19d1/a2ebec7c-en.pdf

[70] https://help.openai.com/en/articles/7730893-data-controls-faq

[71] https://help.openai.com/en/articles/8590148-memory-faq

[72] [73] https://support.google.com/families/answer/16109150?hl=en

[78] [79] https://community.perplexity.ai/privacy

[80] [81] https://support.khanacademy.org/hc/en-us/articles/14394569357069-What-happens-if-my-child-or-student-s-Khanmigo-conversation-gets-flagged

[84] [101] https://education.nsw.gov.au/teaching-and-learning/education-for-a-changing-world/nsweduchat

[85] [86] https://education.nsw.gov.au/teaching-and-learning/education-for-a-changing-world/nsweduchat/safety-and-optimisation

[87] https://support.character.ai/hc/en-us/articles/42645561782555-Important-Changes-for-Teens-on-Character-ai

[88] [89] https://www.ft.com/content/71ff16c4-be90-4bf0-ac80-0763f68f55dd

[90] [103] https://blog.khanacademy.org/khan-academys-framework-for-responsible-ai-in-education/

[91] [92] https://support.claude.com/en/articles/13117299-minimum-age-requirement-access-restriction

[98] https://support.google.com/youtubekids/answer/6172308?co=GENIE.Platform%3DAndroid&hl=en

[100] https://nssa.stanford.edu/sites/default/files/Tutor%20CoPilot.pdf
