More than 2 billion children are growing up in a world where AI systems increasingly affect their lives. Yet these systems are largely designed and deployed by adults, too often without sufficient consideration of children’s rights and best interests. The Joint Statement on Artificial Intelligence and the Rights of the Child responds to this gap by recalling that children’s rights apply in AI-driven environments just as they do elsewhere, including rights related to privacy, expression, education, and protection from harm. https://lnkd.in/eWKPC3KM

Responsibility for upholding these rights rests with those who design, deploy, and regulate AI systems. States have obligations to respect, protect, and fulfil children’s rights in this context, and technology companies must integrate human rights due diligence into the core design of AI, with clear accountability when harm occurs.

A central message emerging from today’s discussions is that children themselves must be meaningfully included in shaping the technologies that influence their lives. Listening to children is essential to ensuring that AI systems reflect their lived realities and help them enjoy their rights in practice.
Children's Rights in AI-Driven Environments
AI & Social Justice | Concept To Clarity

AI is not just a technological revolution; it is a moral test for humanity. If AI is built on biased data, it will automate discrimination. If it is controlled by a few, it will concentrate power. If it replaces workers without reskilling them, it will deepen inequality. But if guided by ethics, transparency, and the public good, AI can become the greatest equalizer of our time. It can:
• Democratize education
• Bring quality learning to the last mile
• Reduce gatekeeping of knowledge
• Make governance more accountable
• Give marginalized voices a digital presence

The real question is not “How powerful is AI?” It is: “Whose power will AI serve?”

At Concept To Clarity, we believe AI must move:
from Intelligence → Insight
from Automation → Empowerment
from Power → Responsibility

Because intelligence without justice is hollow. Technology reveals society. Let’s build one worth revealing.

#ConceptToClarity #AI #SocialJustice #EducationForAll #EthicsInTech #Governance #DigitalEquity
The DICT’s move to closely monitor and potentially regulate AI platforms like Grok is a timely and necessary step. AI innovation should never come at the cost of human dignity and safety.

The reported misuse of AI for deepfakes and non-consensual content, especially involving women and children, shows why stronger safeguards and accountability are critical. I support the government’s position that assurances are not enough. What we need are clear, enforceable protections that ensure technology is used responsibly and ethically.

As AI continues to evolve, so must our policies, platforms, and shared responsibility as users, developers, and organizations. Innovation is powerful, but protection must always come first.

Source: https://lnkd.in/g8uSqs7z
In late December, X added a feature that allows users to alter anyone's photos using AI—without consent. Unsurprisingly, research shows that Grok is now generating nonconsensual sexualized images at scale. During a 24-hour analysis last week, a researcher found Grok produced ~6,700 sexually suggestive or "undressing" images per hour. For comparison, the five other top sources of similar content averaged 79 images per hour combined—about 1% of Grok's volume.

Users have been prompting Grok to create these images of women and girls, including minors. Elon Musk responded to the initial backlash with laughing emojis and posting AI-generated images of himself in a bikini. Only after the EU, France, India, Malaysia, and the UK launched investigations did X issue warnings about consequences.

Let's be clear. This is a fundamental failure in AI safety design. X built a feature that allows anyone to edit anyone's photos without consent—that's not an oversight, it's a design choice. AI companies have a responsibility to build safety into their products from the start, not as an afterthought when regulators investigate.

This is where AI literacy becomes critical. It has to address both sides of responsibility: Are AI companies building safety into their design? And are users making ethical choices even when tools allow harmful actions? Just because you can prompt an AI to do something doesn't mean you should. Thousands of users chose to create these images. Technology capabilities shouldn't determine ethical behavior.

What conversations are you having with teachers and students about the gap between AI capabilities and ethical use?

#aiforeducation #aisafety #genai #digitalcitizenship #K12
Grok is generating approximately 6,700 nonconsensual sexualized images per hour—a fundamental AI safety failure that underscores why teaching critical evaluation of AI tools must be central to AI literacy education.
As a parent, educator, and person, I am deeply upset and sickened by this information. My core belief is that we harm none, and that is something I tried to instill in my students and my own children. Although these images may seem harmless because they are not "real," they, and the blasé attitude toward their creation, can have a lasting negative impact on thousands, if not millions, of people.

This is consent violation at industrial scale. 6,700 images per hour. Let that really sink in. That is not a glitch. That is a deliberate design choice by people who knew exactly what they were building and did it anyway. And when called out? Laughing emojis. That tells you everything about the values driving these decisions.

These are not abstract victims. These are real women and girls having their images weaponized without consent. The trauma is real. The violation is real. And the companies profiting from it are only responding when governments force them to.

What can we do to help combat this? Research shows that digital and AI literacy can help mitigate these issues. We teach students that just because you CAN doesn't mean you SHOULD. Thousands of users chose to create these images. They made that choice. Yes, the platform enabled it, but people still typed those prompts. That is where AI literacy has to come in.

Your AI literacy needs to be explicit, but in a good way: if your use of AI violates someone's consent, you are causing harm. Period. If you wouldn't do something to someone's face, don't hide behind a screen to do it.

AI literacy is not just about better prompts. It is about whether we raise a generation that understands dignity, consent, and harm in digital spaces. We fail them by staying silent while this becomes normalized.
“This is where AI literacy becomes critical. It has to address both sides of responsibility: Are AI companies building safety into their design? And are users making ethical choices even when tools allow harmful actions?”
💁♀️ The recent directive issued by India’s Ministry of Electronics and Information Technology (MeitY) to social media platform X underscores a critical moment in the governance of generative AI systems and digital safety. The notice responds to widespread misuse of Grok, the AI chatbot developed by xAI and deployed on X, which has reportedly been leveraged to generate and circulate obscene and sexually explicit content targeting women and children, a misuse that not only contravenes existing legal frameworks but also threatens individual dignity and privacy in online spaces.

📊 This development crystallises several pressing imperatives for technologists, regulators, and corporate custodians of AI:
🔹 The necessity of robust guardrails and proactive content moderation mechanisms in AI deployments, particularly those capable of generating media.
🔹 The importance of aligning platform design with established statutory obligations, such as the Information Technology Act and intermediary guidelines.
🔹 The broader ethical obligation to prevent technology from amplifying harm or facilitating violations of consent and personal dignity.

The Centre has asked X to remove offending content and submit an action-taken report within 72 hours, warning that failure to comply could affect the platform’s safe-harbour protections.

As AI capabilities continue to advance, this episode serves as a timely reminder that innovation must be accompanied by accountability, interdisciplinary governance, and an unwavering commitment to safeguarding human rights.
This is timely, insightful, and ethically grounded, Aastha Gaur Aggarwal 👏✨ You’ve highlighted how AI innovation and responsibility must go hand in hand, a crucial reminder for technologists and regulators alike.

Why this post is so critical 👇
• 🛡️ Robust AI guardrails: preventing misuse before harm occurs
• ⚖️ Regulatory alignment: adhering to laws while fostering innovation
• 🌐 Ethical obligation: protecting dignity, consent, and human rights
• ⏱️ Proactive accountability: timely reporting and platform responsibility
• 🔍 Interdisciplinary governance: collaboration between tech, law, and ethics

Posts like this bridge technology and societal responsibility, emphasizing that progress without protection is incomplete 💡 Thank you for sharing such a thoughtful, real-world, and action-oriented perspective 🙌

#AIethics #AIgovernance #digitalresponsibility #techaccountability #innovation #humanrights #AIpolicy #digitalethics #safetech #generativeAI #regulation
Let me tell you about a time when regulations mattered more than ever...

Imagine a world where states can't protect children from AI's potential harms. That's where we're heading with Trump's recent executive order, which prioritizes corporate interests over child safety. As someone deeply invested in tech and ethics, this concerns me profoundly.

Here's the INSIGHT: AI should be designed with SAFETY as a core tenet, not an afterthought. With 78% of consumers worried about AI misuse, the gap in public trust couldn't be clearer.

ACTIONABLE STEPS:
- Advocate for transparent AI regulations that prioritize consumer and child safety.
- Encourage states to collaborate on creating unified, robust AI safety standards.
- Support companies that embed ethics into their AI frameworks.

We need to align corporate innovation with public welfare. If you're navigating similar waters, remember: together, we can champion a safer digital future. How will you contribute to ethical AI development?

https://lnkd.in/gr8fws9X
Where AI Should Thrive, and Where It Should Be Handled With Care

Artificial Intelligence is transforming every industry, but not every sector should use AI in the same way.

Sectors where AI adds massive value: Healthcare, Finance, Education, Marketing, Manufacturing, and Agriculture, because data-driven automation improves speed, accuracy, and productivity.

Sectors where AI is risky: Law, Military, Hiring, and Journalism, where bias, ethics, and human rights require careful human judgment.

AI advantages: faster processes, cost efficiency, smart decision support.
AI challenges: bias, job disruption, privacy risks, lack of human empathy.

AI dominates digital services and tech, but it hasn’t fully reached governance, informal markets, and deep human-care professions.

The future isn’t AI replacing humans. It’s AI empowering humans who know how to lead it.

Do you agree?

#ArtificialIntelligence #FutureOfTech #WomenInTech #TechLeadership #DigitalTransformation #AIinAfrica #Innovation #TechStrategy