The alarm swirling around Google’s AI health summaries – which produce misleading and sometimes fabricated advice – and around the rise of ‘chatbot therapy’ presents a social problem as one of dangerous new technology. In fact, AI has simply made visible a crisis that was already there. People in the UK are turning to online platforms for mental health advice, housing rights, welfare guidance and domestic abuse support because the institutions that were meant to provide those things have been systematically stripped back.
Over recent decades, the institutions that once underpinned even limited forms of social citizenship have been severely eroded. Local authority spending on youth services in England fell by 74 per cent in real terms between 2010-11 and 2020-21. Child and Adolescent Mental Health Services (CAMHS) have become synonymous with delay and rationing. More than 270,000 children in England were waiting for mental health support in 2022–23, with nearly 40,000 waiting over two years for treatment. A 2024 analysis of NHS data found a 53 per cent increase in emergency and urgent referrals to CAMHS crisis teams over three years, with over 600 children a week now reaching crisis point while awaiting support.
Job centres, reshaped by conditionality and sanctions, frequently operate less as sites of support than as instruments of surveillance and discipline, as documented in the government’s own research on the sanctions regime. Cuts to Citizens Advice centres have left many people with nowhere to turn for guidance and support on their legal rights. Funding for homeless shelters has been slashed amid a worsening housing and homelessness crisis.
Into that gap step TikTok videos on PIP appeals, YouTube explainers on Section 21 evictions, Instagram carousels on coercive control, Reddit threads on debt and bailiffs. Teenagers waiting months for CAMHS scroll through instantly available TikTok videos on mental health recovery and ‘healing your inner child’. Survivors of domestic abuse look for guidance on ‘love bombing’ through Reels.
Some of this content is produced by charities, professionals and advice agencies. Much more is made by individuals drawing on their own experience, or by people selling courses, ebooks and ‘coaching’ – or selling themselves as influencers. The information is uneven, unregulated and often monetised, with advice from qualified professionals and reliable sources lost in a sea of misinformation and false claims. It reaches people because they have nowhere else to go.
Tip of the AIceberg
AI now sits on top of this ecosystem. It can summarise forum posts, generate ‘health advice’ boxes and answer welfare questions in a confident tone that conceals how shaky its sources often are. When vulnerable people are fed information that is frequently incorrect, it can perpetuate cycles of harm. Algorithms exacerbate the problem, but machines are not alone in being prone to error and bias: human-written content is often unreliable too.
The deeper problem is that for‑profit platforms and their recommendation systems have been allowed to become part of the basic infrastructure through which people try to access information, without any of the obligations to accuracy and safety that are embedded in public services.
The consequences of this reality are visible not only on TikTok or Reddit but in long-standing forums such as Mumsnet, which combines genuine, often life-saving peer support with threads where speculation and misinformation about health, schooling or social services can circulate at scale – underpinned by advertising and premium memberships. These spaces were not built with cynical intent, but they mirror older gendered norms in which informal ‘mums’ networks’ are expected to absorb the fallout of crumbling public systems. The difference is that this labour is now folded into a profit-driven platform economy.
Algorithms optimise for engagement, which brings in profit, not for accuracy or care. Misinformation also creates controversy, which online drives interaction, engagement and thus profit. A sensational welfare ‘hack’ is more likely to spread than a cautious explainer made by an under‑funded advice centre. A dramatic story of medical neglect will travel faster than a link to an NHS guideline.
Controversy, once stirred, creates further reach, and such content is prioritised by AI tools and other algorithms. When those tools draw on this human-made material, they amplify the skew that is already there. The danger is that health, housing and welfare information has been outsourced to systems that are not accountable to the people who rely on them.
Profit before people
Volunteerism has long been held up as the answer to the government’s hollowing-out of public services. But asking volunteers, charities and informal peer networks to plug systemic gaps is too great a task. The demand is structural and continuous; voluntary work is fragile, episodic and shaped by people’s own exhaustion and responsibilities. Volunteerist rhetoric simply cannot be the answer to austerity – a point proven by the well-documented failures of David Cameron’s ‘Big Society’ project, launched in 2010, and its painful legacies.
Nonetheless, unpaid labour holds up many digital safety nets. Women, people of colour, disabled people and survivors carry much of the unpaid practical and emotional work that goes on in social media comment sections, online forums and direct messages. They talk strangers through safety plans, benefit forms and eviction letters; they intervene to correct misunderstandings and debate options. Where content is not hidden behind a paywall, compulsory adverts and tracking cookies allow platforms to extract data and advertising revenue – as well as ‘content’ – from those very interactions.
Meanwhile, as the state withdraws, specialist services – especially those led by and for marginalised groups – struggle to secure stable funding and are often excluded from commissioning frameworks. Prioritising front-line delivery, these services rarely have the budget for SEO and digital marketing specialists, so they remain far less visible than their for-profit competition, reinforcing their exclusion from digital spaces.
No sticking-plaster response
The state’s response to the spread of misleading and dangerous health, legal and welfare guidance cannot be limited to better ‘AI governance’ while leaving the structural causes intact. There is a need for public, enforceable standards for any system – human or machine‑driven – that delivers information at scale. That means transparency about how content is ranked and recommended; independent oversight and routes to redress when things go wrong; and clear duties not to profit from directing people in crisis towards predatory services.
Above all, it requires rebuilding the public services whose reduction and absence have made a platform-mediated substitute for public services possible in the first place. CAMHS, legal aid, refuges, housing advice and youth work urgently need stable, sufficient funding and democratic control. Online spaces will remain important for mutual aid, organising and information-sharing. But they should supplement, not substitute for, reliable and accountable state services.
AI is not the root cause of this crisis. It is a new layer on top of an old story of austerity, privatisation and a politics that treats survival as an individual problem to be solved with self-help content and good ‘scripts’. As long as that continues, the question will not be whether people trust AI, but why they are forced to ask an algorithm for help at all.