|
Torkel Klingberg, Karolinska Institutet and Samson Nivins, Karolinska Institutet

The digital revolution has become a vast, unplanned experiment – and children are its most exposed participants. As ADHD diagnoses rise around the world, a key question has emerged: could the growing use of digital devices be playing a role? To explore this, we studied more than 8,000 children, from when they were around ten until they were 14 years of age. We asked them about their digital habits and grouped these into three categories: gaming, TV/video (YouTube, say) and social media. The latter included apps such as TikTok, Instagram, Snapchat, X, Messenger and Facebook. We then analysed whether usage was associated with long-term change in the two core symptoms of ADHD: inattentiveness and hyperactivity.

Our main finding was that social media use was associated with a gradual increase in inattentiveness. Gaming and watching videos were not. These patterns remained the same even after accounting for children’s genetic risk for ADHD and their families’ income. We also tested whether inattentiveness might instead cause children to use more social media. It didn’t. The direction ran one way: social media use predicted later inattentiveness.

The mechanisms by which digital media affects attention are unknown. But the lack of a negative effect from other screen activities means we can rule out any general, negative effect of screens, as well as the popular notion that all digital media produces “dopamine hits” which then mess with children’s attention. As cognitive neuroscientists, we can make an educated guess about the mechanisms. Social media introduces constant distractions, preventing sustained attention to any task. Even if it is not the messages themselves that distract, the mere thought of whether a message has arrived can act as a mental distraction. These distractions impair focus in the moment, and when they persist for months or years, they may also have long-term effects. Gaming, on the other hand, takes place during limited sessions, not throughout the day, and involves a constant focus on one task at a time.

The effect of social media, in statistical terms, was not large. It was not enough to push a person with normal attention into ADHD territory. But if the entire population becomes more inattentive, many will cross the diagnostic border. Theoretically, an increase of one hour of social media use across the entire population would increase diagnoses by about 30% (a back-of-envelope sketch at the end of this article illustrates this population-shift logic). This is admittedly a simplification, since diagnoses depend on many factors, but it illustrates how an effect that is small at the individual level can have a significant effect when it touches an entire population.

A lot of data suggests that we have seen at least one more hour per day of social media use during the last decade or two. Twenty years ago, social media barely existed. Now, teenagers are online for about five hours per day, mostly on social media. The percentage of teenagers who claim to be “constantly online” has increased from 24% in 2015 to 46% in 2023. Given that social media use has risen from essentially zero to around five hours per day, it may explain a substantial part of the increase in ADHD diagnoses during the past 15 years.

The attention gap

Some argue that the rise in the number of ADHD diagnoses reflects greater awareness and reduced stigma. That may be part of the story, but it doesn’t rule out a genuine increase in inattention.
Also, studies claiming that symptoms of inattention have not increased have often looked at children who were probably too young to own a smartphone, or at a period that mostly predates the avalanche of scrolling. Social media probably increases inattention, and social media use has rocketed.

What now?

The US requires children to be at least 13 to create an account on most social platforms, but these restrictions are easy to circumvent. Australia is currently going the furthest: from December 10 2025, social media companies will be required to ensure that users are 16 or above, with heavy penalties for companies that do not comply. Let’s see what effect that legislation will have. Perhaps the rest of the world should follow the Australians.

Torkel Klingberg, Professor of Cognitive Neuroscience, Karolinska Institutet and Samson Nivins, Postdoctoral Researcher, Women's and Children's Health, Karolinska Institutet

This article is republished from The Conversation under a Creative Commons license. Read the original article. |
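The roughly 30% figure above can be understood with simple distribution arithmetic: when everyone’s inattentiveness shifts up slightly, the fraction crossing a fixed diagnostic cutoff grows disproportionately. The sketch below is our own illustration of that logic, not the authors’ model; the standard-normal scores, the 97.7th-percentile cutoff and the 0.1 standard-deviation shift are all assumed values chosen for demonstration.

```python
# Illustrative only: how a small population-wide shift in scores interacts
# with a fixed diagnostic cutoff. All parameters are assumptions, not values
# taken from the study described above.
from scipy.stats import norm

cutoff = norm.ppf(0.977)  # hypothetical diagnostic threshold (97.7th percentile)
shift = 0.1               # hypothetical population-wide shift, in SD units

before = norm.sf(cutoff)         # fraction above the cutoff originally
after = norm.sf(cutoff - shift)  # fraction above it after the whole curve shifts

print(f"diagnosed before:  {before:.2%}")              # ~2.30%
print(f"diagnosed after:   {after:.2%}")               # ~2.90%
print(f"relative increase: {after / before - 1:.0%}")  # ~26%
```

A shift of a tenth of a standard deviation, far too small to matter for any single child, pushes roughly a quarter more people past the cutoff – the population-level logic behind the authors’ estimate.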
Social media, not gaming, tied to rising attention problems in teens, new study finds (2025-12-19T11:13:00+05:30)
Caregiver smartphone use can affect a baby’s development. New parents should get more guidance (2025-12-17T11:27:00+05:30)
Miriam McCaleb, University of Canterbury

We already know excessive smartphone use affects people’s mental health and their relationships. But when new parents use digital technologies during caregiving, they might also compromise their baby’s development. Smartphone use in the presence of infants is associated with a range of negative developmental outcomes, including threats to the formation of a secure attachment. The transition into parenthood is an ideal time for healthy behaviour change. Expectant parents see a range of professionals, but as we found in our new study, they don’t receive any co-ordinated support or advice on managing digital devices in babies’ presence. One of the new mums we interviewed said:
Another participant said:
Adult smartphone use is not mentioned in well-child checks. We argue this is a missed public health opportunity. Secure attachment is important for a baby’s development. Babies need hours of gazing at their families’ faces to optimally wire their brains. This is more likely when the parent is sensitive to a baby’s cues and emotionally available. But ubiquitous smartphone use by caregivers has the potential to disrupt attachment by interrupting this sensitivity and availability. Babies’ central nervous systems and senses are immature. But they are born into a rapidly moving world, filled with voices and faces from digital sources. This places a burden on caregivers to act as a human filter between a newborn’s neurobiology and digital distractions.

Disrupting relationships

Psychologists have described the phenomenon of frequent disruptions and distractions during parenting – and the disconnection of the in-person relationship – as “technoference”. A caregiver’s eyes are no longer on the infant but on the device. Their attention is gone, in a state described as “absent presence”, and the phone becomes a “social pollution”. It’s unpleasant for anyone on the other side of this imbalance. But for babies, whose connection to their significant adults is the only thing that can make them feel safe enough to learn and grow optimally, it causes disproportionate harm because of their vulnerable developmental stage. During the rapid phase of brain growth in infancy, babies are wired to seek messages of safety from their caregiver’s face. Smartphone use blanks caregivers’ facial expressions in ways that cause physiological stress to babies. When a caregiver uses their phone while feeding an infant, the baby is more likely to be overfed. The number of audible notifications on a parent’s device relates to a child’s language development, with more alerts associated with fewer words at 18 months.

If that’s not reason enough to rein in phone use, evidence also shows that smartphone use can be a source of stress and guilt for parents. This suggests parents themselves would benefit from more purposeful and reduced smartphone habits. Some public health researchers are urging healthcare workers to consider the parent-infant relationship in addition to the individual health of the baby and the caregiver. This relational space between people is suffering as a result of the social pollution of smartphone-distracted care. Babies’ brains grow so fast, we mustn’t let this process be compromised by the distraction of the attention economy. Our research shows new parents could use information and support around the use of digital devices. We also recommend that other family members modify their smartphone habits around a new baby. Whānau can create a family media plan and make sure they have someone to talk to about this issue. Health policies should focus on early investment in parents and children, by prioritising education and action on smartphone use around babies. This would benefit the wellbeing of new parents and the lifelong development of infants.

Miriam McCaleb, Fellow in Public Health, University of Canterbury

This article is republished from The Conversation under a Creative Commons license. Read the original article. |
Screenagers face troubling addictions from an early age (2025-12-12T11:27:00+05:30)
Early exposure can lead to addiction. Brit., CC BY-NC-ND
Joseph Attard, King's College London and Mark Griffiths, Nottingham Trent University

In 1997, Douglas Rushkoff boldly predicted the emergence of a new caste of tech-literate adolescents. He argued that the children of his day would soon blossom into “screenagers”, endowed with effortless advantages over their parents, having been raised from birth on a diet of computers and micro-chipped devices. Fast-forward to 2014: the screenagers have come of age in a world ruled by Twitter and Candy Crush Saga. A substantial body of evidence addresses the ways in which media saturation shapes the identities of children and adolescents. While there are clear benefits to maturing as a digital native, a number of experts are concerned about the physical and psychological health of our screenagers.

The perils of media-immersion

There are advantages to tech-literacy from an early age, such as gaining IT skills that will serve you well in the future, but there are risks too. Aside from the dangers of social isolation and physical inactivity, there are also dangers that come not directly from any IT medium itself, but from what happens when children are exposed to them. The ability to access pornography or gamble online throws up all kinds of issues when children are involved. Particularly insidious are “foot-in-the-door” products which, combined with big data marketing techniques, specifically target adolescents and stimulate pathological behaviour. For example, a number of free Facebook games, including Zynga Poker and Slotomania, normalise gambling and divorce the thrill of playing from the consequences of losing. The player gets to experience the highs of winning but, because there is no money involved, suffers no real-life consequences when they lose. This poses a major risk and could lead to problem gambling in adolescence. Other freemium app and internet games also carry a risk factor for pathological behaviour. So-called “casual games” such as Flappy Bird, Bejeweled and Candy Crush Saga use behavioural conditioning techniques to keep players invested for long stretches, which may inhibit the social development of youngsters.

And even if we don’t buy into the moral panic so often spread by the media, there is evidence to suggest that sustained access to pornography can have detrimental effects on young people. Mental health website Psych Central reports that not only is pornography easy to stumble across online (with search terms like “toy” often throwing up adult images), but repeated exposure can be over-stimulating and potentially addictive for young people. According to the site, “Cybersex addiction functions in a similar way to any other addiction, leading to a cycle of preoccupation, compulsion, acting out, isolation, self-absorption, shame and depression as well as distorted views of real relationships and intimacy.” Most susceptible to compulsive porn viewing are teens with limited parental support, which also correlates with unsupervised web access.

New addictions

While the addictiveness of certain activities is reasonably well established, the more general concept of “media addiction” in young people is harder to pin down. For a start, it isn’t easy to define addiction as it applies to any activity, even for traditional problems such as gambling. So when it comes to new technologies and services, the picture becomes more confused.
It is tempting to discuss “media addiction” as a catch-all term for spending too much time online, but there are so many opportunities for digital natives to engage in harmful activities that we ought to think in more detail about the problems that can arise for them. While we might group people together as “Facebook addicts”, for example, there may well be a big difference between someone who spends an unhealthy amount of time growing virtual tomatoes on Farmville and someone who is pathologically engrossed in instant messaging.

Starting young

These phenomena are disconcerting enough on their own, but we also need to address the fact that for today’s youngsters, the process of media immersion often begins in very early childhood. Last year, campaign group Common Sense Media and electronics company VTech carried out a survey of 1,463 parents with children aged under eight in the US and found 75% had access to smart devices, up from 52% in 2011. This suggests that by the time they hit their teens, there is a high probability that young children will be active participants in global information networks. Whereas children of the 1990s were raised on a diet of discontinuous digital media (MTV and 16-bit gaming), the next wave of screenagers will hold multiple social media accounts, exposing them to all the hazards this level of connectivity implies. From underage users viewing gambling as a source of wealth to adolescents whose formative sexuality is filtered through internet porn, the influence of media-immersion on developing minds is disquieting. One can only imagine the mental state of young people when a universe of information, temptations and perils can be carried around in their pocket. While it’s obvious that internet use carries huge advantages for young people, they also need to be educated about the dangers before addictions develop.

Joseph Attard, Film Studies PhD Researcher, King's College London and Mark Griffiths, Director of the International Gaming Research Unit and Professor of Gambling Studies, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article. |
Australia’s social media ban is now in force. Other countries are closely watching what happens (2025-12-11T10:50:00+05:30)
|
After months of anticipation and debate, Australia’s social media ban is now in force. Young Australians under 16 must now come to grips with the new reality of being unable to hold an account on some social media platforms, including Instagram, TikTok and Facebook. Only time will tell whether this bold, world-first experiment will succeed. Even so, many countries are already considering following Australia’s lead, while other jurisdictions are taking a different approach to trying to keep young people safe online. Here’s what’s happening overseas.

A global movement

In November, the European parliament called for a similar social media ban for under-16s. The President of the European Commission, Ursula von der Leyen, said she has been studying Australia’s restrictions and how they address what she described as “algorithms that prey on children’s vulnerabilities”, leaving parents feeling powerless against “the tsunami of big tech flooding their homes”. In October, New Zealand announced it would introduce legislation similar to Australia’s, following the work of a parliamentary committee examining how best to address harm on social media platforms. The committee’s report will be released in early 2026. Pakistan and India are aiming to reduce children’s exposure to harmful content by introducing rules requiring parental consent and age verification for platform access, alongside content moderation expectations for tech companies. Malaysia has announced it will ban children under 16 from social media starting in 2026. This follows the country requiring social media and messaging platforms with eight million or more users to obtain licenses to operate, and to use age verification and content-safety measures, from January 2025.

France is also considering a social media ban for children under 15 and a 10pm to 8am curfew on platform use for 15- to 18-year-olds. These are among 43 recommendations made by a French inquiry in September 2025, which also recommended banning smartphones in schools and creating a crime of “digital negligence” for parents who fail to protect their children. While France introduced a requirement in 2023 that platforms obtain parental consent for children under 15 to create social media accounts, it has yet to be enforced. This is also the case in Germany, where children aged between 13 and 16 can only access platforms with parental consent, but without formal checks in place. And in Spain, the minimum age for social media accounts will rise from 14 to 16, unless parents provide consent. Norway announced plans in July to restrict access to social media for under-15s. The government explained the law would be “designed in accordance with children’s fundamental rights, including freedom of expression, access to information, and the right to association”. In November, Denmark announced it would “ban access to social media for anyone under 15”. However, unlike Australia’s legislation, parents can override the rules to enable 13- and 14-year-olds to retain platform access. Yet there is no date for implementation, with lawmakers expected to take months to pass the legislation. It’s also unclear how Denmark’s ban will be enforced, though the country does have a national digital ID program that may be used. In July, Denmark was named as part of a pilot program (with Greece, France, Spain and Italy) to trial an age verification app that could be launched across the European Union for use by adult content sites and other digital providers.
Some pushback

Similar restrictions are not being taken up everywhere. For example, South Korea has decided against a social media ban for children, but it will ban the use of mobile phones and other devices in classrooms starting in March 2026. In the city of Toyoake (south-west of Tokyo, Japan), a very different solution has been proposed. The city’s mayor, Masafumi Koki, issued an ordinance in October limiting the use of smartphones, tablets and computers to two hours per day for people of all ages. Koki is aware of Australia’s social media restrictions. But as he explained:
While the ordinance has faced backlash and is non-binding, it prompted 40% of residents to reflect on their behaviour, with 10% reducing their time on smartphones. In the United States, opposition to Australia’s social media restrictions has been vocal and significant. American media and technology companies have urged President Donald Trump to “reprimand” Australia over its legislation. They argue American companies are being unfairly targeted and have lodged formal complaints with the Office of the US Trade Representative. President Trump has stated he would stand up to any countries that “attacked” American technology companies. The US recently called eSafety Commissioner Julie Inman-Grant to testify in front of Congress. US Republican Jim Jordan claimed her enforcement of Australia’s Online Safety Act “imposes obligations on American companies and threatens speech of American citizens”, which Inman-Grant strongly denied.

The world will keep watching

While much of the world seems united in concern about the harmful content and algorithmic features children experience on social media, only one thing is clear – there is no silver bullet for addressing these harms. There is no agreed set of restrictions, nor a specific age at which legislators agree children should have unrestricted access to these platforms. Many countries outside Australia are empowering parents to provide access if they believe it is right for their children. And many countries are considering how best to enforce restrictions if they implement similar rules. As experts point to the technical challenges in enforcing Australia’s restrictions, and as young Australians consider workarounds to maintain their accounts or find new platforms to use, other countries will continue to watch and plan their next moves.

Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article. |
What AI earbuds can’t replace: The value of learning another language (2025-12-08T12:48:00+05:30)
Gabriel Guillén, Middlebury College and Thor Sawin, Middlebury College

Your host in Osaka, Japan, slips on a pair of headphones and suddenly hears your words transformed into flawless Kansai Japanese. Even better, their reply in their native tongue comes through perfectly clear to you. Thanks to artificial intelligence, neither of you is lost in translation. What once seemed like science fiction is now marketed as a quick fix for cross-cultural communication. Such AI-powered tools will be useful for many people, especially for tourists or in any purely transactional situation, even if seamless automatic interpretation remains at an experimental stage. Does this mean the process of learning another language will soon be a thing of the past? As scholars of computer-assisted language learning and linguistics, we disagree, and we see language learning as vital in other ways. We have devoted our careers to this field because we deeply believe in the lasting and transformative value of learning and speaking languages beyond one’s mother tongue.

Lessons from past language ‘disruptions’

This isn’t the first time a new technology has promised massive disruption to learning languages. In recent years, language learning startups such as Duolingo aimed to make acquiring a language easier than ever, in part by gamifying it. While these apps have certainly made learning more accessible to more people, our research shows most platforms and apps have failed to fully replicate the inherently social process of learning a language.

The meaning of learning a language

Numbers aside, the gold standard of language learning is the ability to follow and contribute to a live group conversation. Since World War II, government departments and education programs have recognized that text-centered grammar-translation methods did little to support real interaction. Interpersonal conversational competence gradually became the main goal of language classes. While technologies you can put in your ear or wear on your face now promise to revolutionize interpersonal interaction, their usefulness in such conversations actually falls along a spectrum. At one end are simple tasks you have to navigate while visiting a city where a different language is spoken: checking out of a hotel, buying a ticket at a kiosk or finding your way around town. These involve people from different backgrounds working together to achieve a goal – a successful checkout, a ticket purchase or getting to the famous museum you want to visit. Any mix of languages, gestures or tools – even AI tools – can help in this context. In such cases, where the goal is clear and both parties are patient, shared English or automated interpretation can get the job done while bypassing the hard work of language learning. At the other end, identity matters as much as content. Meeting your in-laws, introducing yourself at work, welcoming a delegation or presenting to a skeptical audience all involve trust and social capital. Humor, idioms, levels of formality, tone, timing and body language shape not just what you say but who you are. The effort of learning a language communicates respect, trust and a willingness to see the world through someone else’s eyes. We believe language learning is one of the most demanding and rewarding forms of deep work, building cognitive resilience, empathy, identity and community in ways technology struggles to replicate.
The 2003 movie “Lost in Translation,” which depicts an older American man falling in love with a much younger American woman, was not about getting lost in the language; it delved into issues of interculturality and finding yourself while exposed to the other. Indeed, accelerating mobility due to climate migration, remote work and retirement abroad all increases the need to learn languages – not just translate them. Even those staying in place often seek deeper connections through language, as learners with familial and historical ties.

A Spanish learner from China negotiates meaning with an English learner from Mexico in California. Gabriel Guillén, 2025, CC BY-SA

Where AI falls short

The latest AI technologies, such as those used by Apple’s newest AirPods to instantly interpret and translate, certainly are powerful tools. They will help a lot of people interact with speakers of a different language in ways previously only possible for someone who had spent a year or two studying it. It’s like having your own personal interpreter. Yet relying on interpretation carries hidden costs: distortion of meaning, loss of interactive nuance and diminished interpersonal trust. An ethnography of American learners with strong motivation and near-limitless support found that falling back on speaking English and using technology to aid translation may be easier in the short term, but it undercuts long-term language and integration goals. Language learners constantly face this choice between short-term ease and long-term impact. Some AI tools help accomplish immediate tasks, and generative AI apps can support acquisition, but they can also take away the negotiation of meaning from which durable skills emerge. AI interpretation may suffice for one-on-one conversations, but learners usually aspire to join ongoing conversations already being had among speakers of another language. Long-term language learning, while necessarily friction-filled, is nevertheless beneficial on many fronts. Interpersonally, using another’s language fosters both cultural and cognitive empathy. In addition, the cognitive benefits of multilingualism are equally well documented: resistance to dementia, divergent thinking, flexibility in shifting attention, acceptance of multiple perspectives and explanations, and reduced bias in reasoning. The very attributes companies seek in the AI age – resilience, lifelong learning, analytical and creative thinking, active listening – are all cultivated through language learning.

Rethinking language education in the age of AI

So why, in the increasingly multilingual U.K. and U.S., are fewer students choosing to learn another language in high school and at university? The reasons are complex. Too often, institutions have struggled to demonstrate the relevance of language studies. Yet innovative approaches abound, from integrating language into the contexts of other subjects and linking it to service and volunteering, to connecting students with others through virtual exchanges or community partners via project-based language learning, all while developing intercultural skills. So, again, what’s the value of learning another language when AI can handle tourism phrases, casual conversation and city navigation? The answer, in our view, lies not in fleeting encounters but in cultivating enduring capacities: curiosity, empathy, deeper understanding of others, the reshaping of identity and the promise of lasting cognitive growth. For educators, the call is clear.
Generative AI can take on rote and transactional tasks while excelling at error correction, adapting input and vocabulary support. That frees classroom time for multiparty, culturally rich and nuanced conversation. Teaching approaches grounded in interculturality, embodied communication, play and relationship building will thrive. Learning this way enables learners to critically evaluate what AI earbuds or chatbots create, to join authentic conversations and to experience the full benefits of long-term language learning. Gabriel Guillén, Professor of Language Studies, Middlebury College and Thor Sawin, Professor of Linguistics, Middlebury College This article is republished from The Conversation under a Creative Commons license. Read the original article. |
How parents and teens can reduce the impact of social media on youth well-being (2025-12-04T12:10:00+05:30)
|
Christine Grové, Monash University

Knowing how to navigate the online social networking world is crucial for parents and teens. Being educated and talking about online experiences can help reduce any negative impacts on youth mental health and well-being. The Australian Psychological Society (APS) recently released a national survey looking at the impact of technology and social media on the well-being of Australians. Around 1,000 adults over the age of 18 and 150 young people aged 14-17 years took part. The survey found more than three in four young people (78.8%) and more than half of all adults (54%) were highly involved with their mobile phones. Young people reported using social media for an average of 3.3 hours each day, on five or more days of the week. The vast majority of adults and teenagers reported their screens and social media accounts were a positive part of their lives. Many use social media channels to connect with family and friends and to entertain themselves.

Too much social media use can affect self-esteem

Despite social media playing a positive role for most, the survey found that high use of social media and technology can have a negative impact on youth self-esteem. Two in three young people feel pressure to look good, and nearly a third of youth have been bullied online. Nearly half (42%) of frequent users look at social media in bed before sleeping. The survey also found 15% of teenagers reported being approached by strangers on a daily basis through their online world. Around 60% of parents never monitor their teen’s social media account and are wrestling with their own issues about how much screen time is too much. Most are unsure of how to provide good guidance on appropriate social media use to their teens.

Engage with your teen’s online world

Parents and teens need to be informed about engaging with the online world. Parents can ask their teen to show them how they use social media and what it is. Try to navigate the social world together, rather than acting as a supervisor. Ask your teen to help you understand how they use the internet so you can make good decisions about social media use together. Here are a few tips to connect with your teen’s online world:
Difficult conversations about social media

An important step in navigating the risks of social networking is to have ongoing conversations about social media use with your teens. If you’re already engaged in your teen’s online world, it will be easier to have difficult conversations about some of the risks and ways to manage them. Many people believe internet browsing is anonymous. Educate your teen about their digital reputation. Whenever your teen visits a website, shares content, posts something on a blog or uploads information, they’re adding to their digital footprint. This information can be gathered under their real name and possibly accessed by future employers or marketing departments. This can happen without you or your teen knowing. Protecting personal information, and knowing that browsing is not truly anonymous, are important conversations to have together.

Cyberbullying can occur when online users try to intimidate, exclude or humiliate others through abusive texts or emails, hurtful messages, images or videos, or online gossip and chat. Let your teen know to try not to retaliate or respond, and to speak to a trusted adult right away. Aim to block the bully and report the behaviour to the social media platform. Create a family media plan to help manage social media use, with options to create different guidelines for each teen. In the plan, promote healthy technology habits with your teen. This includes not using technology too close to bedtime. Research shows using technology at night can have a negative impact on sleep quality. Try not to use technology for around 30 minutes to an hour before bedtime, and consider using devices in the living spaces of the house rather than in the bedroom when it’s time to go to sleep. Here’s some more information on how to talk to your teens about their internet use, and thriving in an online age.

Christine Grové, Educational Psychologist and Lecturer, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article. |
Cybersecurity experts call for AI-driven defence (2025-12-03T12:50:00+05:30)
|
GUWAHATI, (MExN): Citing growing instances of cyber attacks from “non-friendly countries,” cybersecurity experts called for a resilient AI-driven defence mechanism and investigation process, alongside greater public awareness. At a national conference, experts highlighted that IT system constraints, too many disconnected threads, and limited remote capabilities were holding back investigations, which could be fast-tracked with the adoption of proper AI tools and skilling of users. The conference on 'Cybersecurity, Digital Forensics and Intelligence' was organised by the National Institute of Electronics and Information Technology (NIELIT) at Gauhati University. Citing alarming data, Keshri Kumar Asthana, Head of Public Sector at Microsoft, said, “India lost Rs 22,845 crore to cyber fraud in 2024, a 205.6 per cent surge from the previous year.” He stated that over 36 lakh financial fraud cases were reported during the year, and around 20.5 lakh cybersecurity incidents were reported to CERT-In in 2024, up from 15.9 lakh in 2023. Asthana added that the average cost of a data breach in India in 2025 is Rs 22 crore, the highest on record, driven by gaps in governance and security. “Around 83 per cent of organizations experience more than one data breach in their lifetime. The cost is high as the incidents are being caught late,” he said. He emphasised that defenders must adopt a “graphical thinking” approach to counter attackers who already use such methods. Sandesh Jadhav, Global Data Privacy Officer of Wipro, cautioned people to be vigilant while using social media and digital platforms, warning, “You are being watched continuously.” Shreekrishna Ashutosh of Cellebrite pointed to operational challenges, noting that 50 per cent of agencies report case backlogs yearly, and 60 per cent of investigators still rely on outdated methods. “The average time spent per case reviewing digital evidence is 69 hours,” he said. Asthana underscored the critical role of digital evidence, stating, “90 per cent of criminal cases include digital evidence and 98 per cent of prosecutors say it is pivotal. Digital evidence is no longer optional but essential.” The two-day event is being organised by NIELIT Assam & Nagaland under the Ministry of Electronics & Information Technology (MeitY), in association with Assam Police and Gauhati University. The conference is being held under the theme “Cyber Secure Bharat: Fortifying India’s Digital Future.” Delivering the welcome address, L Lanuwabang, Director of NIELIT Assam & Nagaland and Conference Chair, highlighted the expansion of the conference to Guwahati to ensure wider participation across the North-East. He emphasised the need for advanced cyber training, digital forensic laboratory infrastructure, coordinated cyber investigations, and multi-agency collaboration. “Cyber Secure Bharat is not merely a theme; it is a national mission. A secure India is the foundation of a strong digital future,” Lanuwabang remarked. The inaugural ceremony was attended by K S Gopinath Narayan, Principal Secretary, IT, Government of Assam; Prof Nani Gopal Mahanta, Vice Chancellor, Gauhati University; and Surendra Kumar, Additional Director General of Police, Assam, among others. This year's conference features over 30 speakers from industry, defence, law enforcement, and academia, with more than 300 delegates in attendance. The inaugural programme concluded with a vote of thanks delivered by Santanu Borgohain, Additional Director, NIELIT Guwahati. 
The second day will feature technical sessions and panel discussions on emerging threats and cybercrime trends. |
Australia is facing an ‘AI divide’, new national survey shows (2025-11-24T14:00:00+05:30)
Kieran Hegarty, RMIT University; Anthony McCosker, Swinburne University of Technology; Jenny Kennedy, RMIT University; Julian Thomas, RMIT University, and Sharon Parkinson, Swinburne University of Technology

In the short time since OpenAI launched ChatGPT in November 2022, generative artificial intelligence (AI) products have become increasingly ubiquitous and advanced. These machines aren’t limited to text – they can now generate photos, videos and audio in a way that’s blurring the line between what’s real and what’s not. They’ve also been woven into tools and services many people already use, such as Google Search. But who is – and isn’t – using this technology in Australia? Our national survey, released today, provides some answers. The data is the first of its kind. It shows that while almost half of Australians have used generative AI, uptake is uneven across the country. This raises the risk of a new “AI divide”, which threatens to deepen existing social and economic inequalities.

A growing divide

The “digital divide” refers to the gap between people or groups who have access to, can afford and make effective use of digital technologies and the internet, and those who cannot. These divides can compound other inequalities, cutting people off from vital services and opportunities. Because these gaps shape how people engage with new tools, there’s a risk the same patterns will emerge around AI adoption and use. Concerns about an AI divide – raised by bodies such as the United Nations – are no longer speculative. International evidence is starting to illustrate a divide in capabilities between and within countries, and across industries.

Who we heard from

Every two years, we use the Australian Internet Usage Survey to find out who uses the internet in Australia, what benefits they get from it, and what barriers exist to using it effectively. We use these data to develop the Australian Digital Inclusion Index – a long-standing measure of digital inclusion in Australia. In 2024, more than 5,500 adults across all Australian states and territories responded to questions about whether and how they are using generative AI. This includes a large national sample of First Nations communities, people living in remote and regional locations, and those who have never used the internet before. Other surveys have tracked attitudes towards AI and its use. But our study is different: it embeds questions about generative AI use inside a long-standing, nationally representative study of digital inclusion that already measures access, affordability and digital ability. These are the core ingredients people need to benefit from being online. We’re not just asking “who’s trying AI?”. We’re also connecting use of the technology to the broader conditions that enable or constrain people’s digital lives. Importantly, unlike other studies of AI use in Australia collected via online surveys, our sample also includes people who don’t use the internet, or who may face barriers to filling out a survey online.

Australia’s AI divide is already taking shape

We found 45.6% of Australians have recently used a generative AI tool. This is slightly higher than the rate of use identified in a 2024 Australian study (39%). Looking internationally, it is also slightly higher than usage by adults in the United Kingdom (41%), as identified in a 2024 study by the country’s media regulator. Among Australian users, text generation is common (82.6%), followed by image generation (41.5%) and code generation (19.9%).
But usage isn’t uniform across the population. For example, younger Australians are more likely to use the technology than their elders. More than two-thirds (69.1%) of 18- to 34-year-olds recently used one of the many available generative AI tools, compared with fewer than 1 in 6 (15.5%) 65- to 74-year-olds. Students are also heavy users (78.9%). People with a bachelor’s degree (62.2%) are much more likely to use the technology than those who did not complete high school (20.6%). Those who left school in Year 10 (4.2%) are among the lowest users. Professionals (67.9%) and managers (52.2%) are also far more likely to use these tools than machinery operators (26.7%) or labourers (31.8%). This suggests use is strongly linked to occupational roles and work contexts. Among people who use AI, only 8.6% engage with a chatbot to seek connection. But this figure rises with remoteness: generative AI users in remote areas are more than twice as likely (19%) as metropolitan users (7.7%) to use AI chatbots for conversation. Some 13.6% of users are paying for premium or subscription generative AI tools, with 18- to 34-year-olds most likely to pay (17.5%), followed by 45- to 54-year-olds (13.3%). Also, people who speak a language other than English at home report significantly higher use (58.1%) than English-only speakers (40.5%). This may be associated with improvements in the capabilities of these tools for translation or for accessing information in multiple languages.

Bridging the divide

This emerging AI divide presents several risks if it calcifies, including disparities in learning and work, and increased exposure of certain people to scams and misinformation. There are also risks stemming from overreliance on AI for important decisions, and from navigating harms related to persuasive AI companions. The biggest challenge will be how to support AI literacy and skills across all groups. This isn’t just about job readiness or productivity. People with lower digital literacy and skills may miss out on AI’s benefits and face a higher risk of being misled by deepfakes and AI-powered scams. These developments can easily dent the confidence of people with lower levels of digital literacy and skills. Concern about harms can see people with limited confidence withdraw further from AI use, restricting their access to important services and opportunities. Monitoring these patterns over time and responding with practical support will help ensure the benefits of AI are shared widely – not only by the most connected and confident.

Kieran Hegarty, Research Fellow, ARC Centre of Excellence for Automated Decision-Making & Society, RMIT University; Anthony McCosker, Professor of Media and Communication, Director, Social Innovation Research Institute, Swinburne University of Technology; Jenny Kennedy, Associate Professor, Media and Communications, RMIT University; Julian Thomas, Distinguished Professor of Media and Communications; Director, ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University, and Sharon Parkinson, Senior Research Fellow, Centre for Urban Transitions, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article. |
Online Taxi Service UraCab expands to Kiphire district (2025-11-18T11:56:00+05:30)
|
Officials and others during the launching programme of UraCab Kiphire Branch at Medical Ward, Kiphire on November 4.

Kiphire, (MExN): The launching programme of the UraCab Kiphire Branch was held at Medical Ward, Kiphire on November 4. Atouzo Metha, Assistant Fleet Manager, was the Chairperson. The programme began with an invocation prayer by Nancy Sangtam, Associate Secretary of Christian Education, USBLA Sub-Centre. In his welcome address, Keneilezo Rutsa, General Manager of UraCab, shared that UraCab has registered more than 200 drivers and 4,000 users, and has completed more than 1,500 rides and counting. He mentioned that UraCab started its journey in 2022 and was formally inaugurated by the Chief Minister of Nagaland in November 2024. Longdiba L Sangtam, NCS, Additional Deputy Commissioner Kiphire, launched the UraCab branch in Kiphire. He remarked that the introduction of an online taxi service is a great step forward for Kiphire. Despite the challenges in road connectivity, he said the service will benefit both locals and visitors traveling to Kiphire. He encouraged the UraCab team, especially Manager Longtili C Sangtam, to remain committed and not give up easily. He also mentioned that the administration is always ready to support innovative and entrepreneurial ventures, emphasizing that success comes through hard work and consistency. Delivering the closing remarks, Longtili C Sangtam, Manager of UraCab Kiphire, expressed his gratitude to the ADC Kiphire, Nancy Sangtam, Associate Secretary Children Education USBLA Sub-center Kiphire, the Executive Chairman USSC, the Local Taxi Union, the All Nagaland Taxi Association Kiphire Unit, the Two-Wheeler Taxi Association Kiphire, the Chairman of Medical Ward, and the UraCab team for their support in making the programme a success. He added that the launch would not only improve travel facilities but also boost the local economy and promote tourism in Kiphire in a sustainable way. |
Why industry-standard labels for AI in music could change how we listen (2025-11-13T11:32:00+05:30)
Gordon A. Gow, University of Alberta and Brian Fauteux, University of Alberta

Earlier this year, a band called The Velvet Sundown racked up hundreds of thousands of streams with retro-pop tracks, generating a million monthly listeners on Spotify. But the band wasn’t real. Every song, image, and even its back story had been generated by someone using generative AI. For some, it was a clever experiment. For others, it revealed a troubling lack of transparency in music creation, even though the band’s Spotify descriptor was later updated to acknowledge it is composed with AI. In September 2025, Spotify announced it is “helping develop and will support the new industry standard for AI disclosures in music credits developed through DDEX.” DDEX is a not-for-profit membership organization focused on the creation of digital music value chain standards. The company also says it’s focusing work on improved enforcement of impersonation violations and a new spam-filtering system, and that the updates are “the latest in a series of changes we’re making to support a more trustworthy music ecosystem for artists, for rights-holders and for listeners.” As AI becomes more embedded in music creation, the challenge is balancing its legitimate creative use with the ethical and economic pressures it introduces. Disclosure is essential not just for accountability, but to give listeners transparent and user-friendly choices about the artists they support.

A patchwork of policies

The music industry’s response to AI has so far been a mix of ad hoc enforcement as platforms grapple with how to manage emerging uses and expectations of AI in music. Apple Music took aim at impersonation when it pulled the viral track “Heart on My Sleeve” featuring AI-cloned vocals of Drake and The Weeknd. The removal was prompted by a copyright complaint reflecting concerns over misuse of artists’ likeness and voice. The indie-facing song promotion platform SubmitHub has introduced measures to combat AI-generated spam. Artists must declare if AI played “a major role” in a track. The platform also has an “AI Song Checker” so playlist curators can scan files to detect AI use. Spotify’s announcement adds another dimension to these efforts. By focusing on disclosure, it recognizes that artists use AI in many different ways across music creation and production. Rather than banning these practices, it opens the door to an AI labelling system that makes them more transparent.

Labelling creative content

Content labelling has long been used to help audiences make informed choices about their media consumption. Movies, TV and music come with parental advisories, for example. Digital music files also include embedded information tags called metadata, which hold details like genre, tempo and contributing artists that platforms use to categorize songs, calculate royalty payments and suggest new songs to listeners. Canada has relied on labelling for decades to strengthen its domestic music industry. The MAPL system requires radio stations to play a minimum percentage of Canadian music, using a set of criteria to determine whether a song qualifies as Canadian content based on music, artist, production and lyrics. As more algorithmically generated AI music appears on streaming platforms, an AI disclosure label would give listeners a way to discover music that matches their preferences, whether they’re curious about AI collaboration or drawn to more traditional human-crafted approaches.
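To make the metadata idea concrete, here is a minimal sketch of how an AI-use disclosure might be written into, and read back from, a song file’s embedded tags. It uses the Python library mutagen and an ID3 user-defined text frame (TXXX); the “AI_DISCLOSURE” field name and its value format are hypothetical illustrations, not the DDEX standard the article describes.

```python
# A minimal sketch, not the DDEX standard: storing a hypothetical
# AI-disclosure field in an MP3's ID3 tags using mutagen.
from mutagen.id3 import ID3, TXXX

def write_ai_disclosure(path: str, disclosure: str) -> None:
    """Attach a user-defined AI-disclosure frame (assumes the file already has ID3 tags)."""
    tags = ID3(path)                      # load the file's existing tags
    tags.add(TXXX(encoding=3,             # 3 = UTF-8
                  desc="AI_DISCLOSURE",   # hypothetical field name
                  text=[disclosure]))
    tags.save()

def read_ai_disclosure(path: str):
    """Return the disclosure text if present, else None."""
    for frame in ID3(path).getall("TXXX"):
        if frame.desc == "AI_DISCLOSURE":
            return str(frame.text[0])
    return None

# Usage with hypothetical file and values:
# write_ai_disclosure("track.mp3", "vocals=synthetic; mixing=human")
# print(read_ai_disclosure("track.mp3"))
```

Because such a disclosure travels inside the file rather than living only in a platform database, any player or curation tool could surface it – the same property that makes existing metadata useful for royalty calculation and recommendations.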
What could AI music labels address?

A disclosure standard will make AI music labelling possible. The next step is cultural: deciding how much information should be shared with listeners, and in what form. According to Spotify, artists and rights-holders will be asked to specify where and how AI contributed to a track. For example, whether it was used for vocals, instrumentation or post-production work such as mixing or mastering. For artists, these details better reflect how AI tools fit into a long tradition of creative use of new technologies. After all, the synthesizer, drum machines and samplers – even the electric guitar – were all once controversial. But AI disclosure shouldn’t give streaming platforms a free pass to flood catalogues with algorithmically generated content. The point should also be to provide information to listeners to help them make more informed choices about what kind of music they want to support. Information about AI use should be easy to see and quick to find. But on Spotify’s Velvet Sundown profile, for example, this is questionable: listeners have to dig down to actually read the band’s descriptor.

AI and creative tensions in music

AI in music raises pressing issues, including around labour and compensation, industry power dynamics, as well as licensing and rights. One study commissioned by the International Confederation of Societies of Authors and Composers has said that generative AI outputs could put 24 per cent of music creators’ revenues at risk by 2028, at a time when many musicians’ careers are already vulnerable to the high cost of living and an unpredictable, unstable streaming music economy. The most popular AI music platforms are controlled by major tech companies. Will AI further concentrate creative power, or are there tools that might cut production costs and become widely used by independent artists? Will artists be compensated if their labels are involved in deals for artists’ music to train AI platforms? The cultural perception around musicians having their music train AI platforms, or using AI tools in music production, is also a site of creative tension.

Enabling listener choice

Turning a disclosure standard into something visible – such as an intuitive label or icon that lets users go deeper to see how AI was used – would let listeners see at a glance how human and algorithmic contributions combine in a track. Embedded in the digital song file, it could also help fans and arts organizations discover and support music based on the kind of creativity behind it. Ultimately, it’s about giving listeners a choice. A clear, well-designed labelling system could help audiences understand the many ways AI now shapes music, from subtle production tools to fully synthetic vocals.

Need for transparency

As the influence of AI in music creation continues to expand, listeners deserve to know how the sounds they love are made – and artists deserve the chance to explain it. Easy-to-understand AI music labels would turn disclosure into something beyond compliance: they might also invite listeners to think more deeply about the creative process behind the music they love.

Gordon A. Gow, Director, Media & Technology Studies, University of Alberta and Brian Fauteux, Associate Professor Popular Music and Media Studies, University of Alberta

This article is republished from The Conversation under a Creative Commons license. Read the original article. |