“I think it’s quite dangerous to use someone else’s situation and try and predict someone else because you could be right, but you could be massively wrong.”
There has been a step change in the practice of assessing and monitoring families, with police and child protection authorities turning to algorithmic analysis in an effort to forecast which families will need intervention. Digital sources of everyday administrative data on families – from education, health, benefits, police records and so on – are joined together, and algorithmic processing is applied to this linked data in an effort to identify potential family problems. It is not just an individual family’s data that is used to forecast its own future actions, however; data from all families is drawn into the predictive modelling net, and it is other families’ propensities that determine whether or not a family is deemed to pose a future risk to its children. Algorithms scan the merged data set for correlations that match patterns of factors in families where undesirable social and criminological outcomes have already happened, in order to identify the risk of harm occurring in other families in the future.
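To make the mechanics concrete, here is a minimal sketch of the kind of pattern-matching described above: a family’s linked records are scored against weights of the sort a model might derive from families where poor outcomes have already been recorded, and a threshold decides who gets flagged. The data fields, weights and threshold are invented for illustration and do not reproduce any actual system.

```python
# Toy sketch only: invented fields, weights and threshold; not any real system.
from dataclasses import dataclass

@dataclass
class FamilyRecord:
    family_id: str
    school_absences: int       # hypothetical education data
    missed_health_appts: int   # hypothetical health data
    benefit_sanctions: int     # hypothetical benefits data
    police_callouts: int       # hypothetical police data

# Weights of the kind a model might learn from families where poor outcomes
# have already been recorded; the values here are invented.
WEIGHTS = {
    "school_absences": 0.3,
    "missed_health_appts": 0.2,
    "benefit_sanctions": 0.25,
    "police_callouts": 0.4,
}
THRESHOLD = 1.6  # invented cut-off above which a family is flagged for attention

def risk_score(rec: FamilyRecord) -> float:
    """Score one family's linked records against patterns drawn from other families."""
    return sum(weight * getattr(rec, field) for field, weight in WEIGHTS.items())

families = [
    FamilyRecord("A", school_absences=4, missed_health_appts=2, benefit_sanctions=1, police_callouts=0),
    FamilyRecord("B", school_absences=1, missed_health_appts=0, benefit_sanctions=0, police_callouts=3),
]

for rec in families:
    score = risk_score(rec)
    print(rec.family_id, round(score, 2), "flagged" if score >= THRESHOLD else "not flagged")
```

The point of the sketch is that whether family A or B is flagged depends entirely on weights learned from other families’ histories, not on anything that has happened in that family.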
Such mass digital predictive monitoring involves a shift in time frames, moving from the established rationale for state intercession in families on the basis of ‘what’s happening’ to intervention on the anticipatory basis of ‘what hasn’t happened but might’. Families become interpolated into a digital, depersonalised pre-problem space that recasts a hypothetical future as a culpable present.
There has been little attention paid to parents’ views on this use of their administrative and other data – a serious ethical and democratic omission. We held group and individual discussions with parents about their views on predictive analytics as part of our research project exploring parents’ understandings of operational data linkage and use practices. Some parents had experienced family support and intervention services, but others had not.
For the most part, parents could see merit in algorithmic applications to administrative data that identified individual families with difficulties in the present who might mistreat their children: “I think if you’re struggling it’s probably good to be flagged up because obviously the kids are probably at risk.” But parents raised major concerns centring on accuracy and fairness, stereotyping and prejudice, and determinism and its consequences.
Parents did not have a lot of confidence in the accuracy and fairness of the sources of administrative information that predictive analytics drew upon. They discussed examples of recorded information that was based on biased assumptions and judgemental views. Black parents had especially strong concerns about unjustified racialised labelling of parents and children: “Sadly stereotypes happened in my situation which is why I challenged it. I was like no, no, no, you’re not painting us out to be like that.” Flawed data can stay on record, however, with the potential to be fed endlessly into algorithmic analysis, magnifying injustices in predictive modelling.
Parents were worried about data analytics anticipating problems in families that not only have no current difficulties but would not go on to develop them in the future (known as false positives): “Something might happen to somebody but there might be a million different ways that things then play out.” Parents were also bothered about families being passed over by predictive analytics because they did not fit the data profile of dysfunctional parenting that the algorithm worked with (known as false negatives). Affluent middle-class families were mentioned in this respect.
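As a rough illustration of the two kinds of error parents described, the snippet below compares which families a hypothetical model flagged with what later happened; the families and outcomes are invented purely to show the distinction.

```python
# Invented data: which families a hypothetical model flagged, and what later happened.
flagged = {"A": True, "B": True, "C": False, "D": False}
harm_later = {"A": True, "B": False, "C": True, "D": False}

# False positives: flagged, but no problems developed (the 'million different ways things play out').
false_positives = [f for f in flagged if flagged[f] and not harm_later[f]]
# False negatives: passed over by the model, but problems did develop.
false_negatives = [f for f in flagged if not flagged[f] and harm_later[f]]

print("False positives (wrongly flagged):", false_positives)      # ['B']
print("False negatives (wrongly passed over):", false_negatives)  # ['C']
```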
The idea of positioning parents and children in a ‘pre-problem’ space, and a deterministic pulling together of the past and the future into the present, caused indignation in the face of the potential damage that could be wreaked, with surveillance, intervention and perhaps children being removed into care. Some parents conjured up potentially dystopian scenarios – ‘Terminator 2 stuff’, ‘Big brother watching’ – to convey their discomfort.
But parents could also be sceptical about how families propelled into the pre-problem space were going to receive interventions in the face of diminished funding for service provision and staff shortages: “The reality is services have been cut till there’s nothing there. To say that you’re going to put that information into an algorithm to identify needs when there are no services for them to access, there’s no help they can have, feels really disingenuous”.
At the very least, parents’ apprehensions about the erroneous and prejudiced data that can be fed into predictive models, and worries about the determinism and accuracy of predictive analysis, should be met by three main provisions: rights for individuals to view personal data held about them online; permission for parents to report data errors and receive correction; and independent review of the accuracy and utility of predictive models. Ultimately, though, a serious public conversation needs to take place about the legitimacy of state construction of a ‘pre-problem family’ space.
Rosalind Edwards is Professor of Sociology at the University of Southampton, UK. Val Gillies is Professor of Social Policy and Criminology at the University of Westminster, UK. Sarah Gorin is an Assistant Professor at the University of Warwick, UK.
Pre-problem families: predictive analytics and the future as the present by Rosalind Edwards, Val Gillies, Sarah Gorin and Hélène Vannier-Ducasse is available in Families, Relationships and Societies on Bristol University Press Digital here.
Follow Transforming Society so we can let you know when new articles publish.
The views and opinions expressed on this blog site are solely those of the original blog post authors and other contributors. These views and opinions do not necessarily represent those of Bristol University Press and/or any/all contributors to this site.
Image credit: ibuki Tsubo via Unsplash