Companies are increasingly interested in selling technologies that will create ‘smart homes’. But what are the implications for children and their parents?
The debate about children and media technology has always shuttled between utopian fantasy and doom-laden lamentation. There is frequently a point at which optimism gives way to scepticism – not least when new products begin to be widely available on the commercial market. We might be reaching that point right now when it comes to the ‘internet of things’, and its manifestation in the marketing of ‘smart homes’.
The key idea here is that, via wireless technology and artificial intelligence, the devices and appliances in our homes can be connected, and can speak to each other. This is something that enthusiasts have been promising for at least a decade, if not two; although it seems to have taken quite some time to arrive for ordinary consumers.
In talking about ‘smart homes’, we’re not talking primarily about screens (phones or computers) but about a much wider range of objects. Some smart home products are fairly mundane – such as wirelessly controlled security devices, video doorbells, locks, thermostats and lighting; and we are also promised smart fridges, vacuum cleaners and toothbrushes. But the kinds of products now beginning to appear are more ambitious. Many of them use voice-recognition, and many are designed to integrate seamlessly into families’ everyday routines.
Smart home technology is increasingly big business (just Google the term and count the number of sponsored ads that appear). At the start of 2020, Amazon estimated that over 200 million Alexa-enabled devices had been sold, and that Alexa technology had been integrated into 100,000 different smart home products.
Amazon is currently the market leader (with around 70% of market share), but other companies are jostling to get on board. And the business potential is not only about selling more stuff – about gadgets that will apparently make our everyday lives easier; it is also about the gathering and selling of personal data that such devices make possible.
The home of the future
Of course, there’s a long history of fantasies about the home of the future: there was a very thought-provoking exhibition on this theme at London’s Design Museum last year, from which some material is still online. The story of responsive, sentient wireless computers is one that’s evident in Star Trek and 2001: A Space Odyssey, for example. The smart home has elements of a James Bond fantasy (or perhaps the Austin Powers parody) so beloved of adolescent boys – the bachelor pad where invisible technology changes the ambience at the wave of a hand or a command to a digital assistant (and I’m surely not the first person to notice that the default voices of these assistants are almost always female).
At the same time, history is also littered with failed consumer technologies (Google Glass, anyone?). Smart home technologies may be just another example of capitalism trying to sell us things we don’t need or want. Technology companies in particular are very keen to offer us ‘solutions’ to problems we never imagined we had in the first place. For businesses, there is a danger of doing things simply because they can (or because new technologies make them possible) rather than because there’s any evidence of demand.
Yet as consumers, we are all well aware that digital technology is subject to planned obsolescence; that it often loses functionality quite quickly; that it may not perform as efficiently or accurately as its marketers suggest; that it may make everyday life harder rather than easier; and that it may well not last beyond its initial moment of novelty. It may be that smart home products are simply the latest form of fashionable conspicuous consumption – the kind of stuff wealthy people have in their houses in order to show off.
Do we need smart homes?
So, do we need smart homes? And what might we need them for? There seems to have been very little research on this question – at least, aside from studies conducted or funded by companies who are looking to make money from such products. I’ve been looking at the scenarios of ‘smart’ family life that are presented by advertisers, and at the work of advocates like David Rose of MIT – whose book and website Enchanted Objects present some more cutting-edge examples.
Broadly speaking, these products seem to have two main aims. On one level, they are simply about convenience: they claim to make life easier, to help us manage daily routines and demands, and to reduce the effort of mundane domestic tasks. For families, especially those with young children, this has undeniable appeal (as the experience of ‘lockdown’ has clearly demonstrated). However, these products are also about the emotional and symbolic aspects of family life: they claim to provide greater safety and security, to encourage togetherness, and to create a more comfortable and relaxed atmosphere.
But where are the limits to this? Do we actually want a life without inconvenience or difficulty or effort, and would it necessarily be good for us? Let’s consider some examples.
Do you want a heating thermostat that you can control remotely? Do you want a device that will monitor your energy usage, and help you to cut your power bills? Do you want to be able to turn your lights on remotely when you’re out of the house, in order to dissuade burglars? Do you want a digital device to water your houseplants when they need it?
Do you want a remote control or a voice-activated PA (Alexa or Siri or Google Assistant) to change the lighting or close the blinds? Do you want to do this at a predetermined time, so you don’t have to shift from the sofa?
Do you want your fridge to tell you when you’re running out of milk? Do you want it to place an order with your milk delivery company? Do you want it to propose recipes and then order the ingredients you need? Do you want a light over your dining room table that gives you nutrition information about what you’re about to eat, or tells you precisely how much salt you’re allowed to put on it?
Do you want a device that will ‘help’ your children with their homework? Do you want it to dim the lights and read them a bedtime story when it’s time for bed? Do you want a device that will programme their morning routines – setting off the alarm clock, playing music to get them moving, reminding them to eat breakfast and brush their teeth and get their school bags ready?
Do you want your kids to have a Teddy Bear (it’s called Teddy the Guardian) with a biometric sensor that monitors the child’s vital signs, sending data direct to your phone?
Do you want a Facebook coffee table that understands keywords from your conversation and then shows relevant digital photos from your collection on its screen? Do you want a jacket that gives you a hug when your friend likes your Facebook post?
Do you want a digital drone that flies around the house recording security video? And as you get into old age, do you want to be conversing with robots or androids?
Of course, different people will have different answers to these questions: some of these devices might be particularly useful for those with disabilities, or with specific needs. Yet for most of us, the key issue is to do with how much control you want to have over your domestic environment, and how far you are willing to cede that control to technology, even technology that you yourself have programmed. And beyond that, how much do you want technology to actively intrude, to guide you, to determine your mood, to set targets and calibrate how you live your life, to tell you what to think and how to behave – even if you’ve already told it to do that? Even if the aim may be to blend in seamlessly, the reality may be very different.
Implications for children
When it comes to children, there may be particular reasons for concern. Predictably, there have been anxieties about safety in this respect – even when it comes to devices that are designed to improve it. Nightmarish incidents of paedophiles hacking into baby monitors have been reported, although they must be exceptionally rare.
However, there are broader questions about family life at stake here. There is a strong tradition of academic research that looks at how technologies are ‘domesticated’. This is partly about how technologies are used and adapted as they fit into family routines and practices – for example in terms of how families manage time, or how different spaces in the home are used (I did some research on this myself some years ago, looking at the technology of video-making). However, this research also explores how family life itself may be changed in the process.
In this instance, we’re talking about technologies that younger children may well be unable to control; and, conversely, technologies that older children may find much easier to control than their parents do. As a new technology enters the scene, differences in technological expertise may change power relationships within the family in ways that we may not necessarily want.
At the same time, many of these technologies permit greater surveillance. I mean this in two senses. Firstly (as with several of the examples I’ve mentioned), these devices promise to provide much more fine-grained, intensive monitoring and control of children by their parents. Today’s parents are increasingly charged with the responsibility for securing their children’s healthy physical and cognitive development. Devices that seem to make it possible to be a ‘good parent’ in this respect may be particularly profitable.
But how far do we want this to go? Many parents would see the point of putting a camera in their baby’s crib so they can keep watch at night (one such product is cosily called the ‘Nest’); or even of installing smart sensors when the children are able to wander about the house at night. But many would probably be much more wary of Teddy the Guardian, or of security drones. Some parents of older children might be interested in devices that monitor what’s going on in their bedrooms, or on their digital screens – and indeed in tracking technologies that show their geographical location and their physical state when they are out of the house. But older children themselves might well be inclined to resist or escape this type of surveillance.
Just as technology can be used as a kind of digital ‘leash’, so the home can become a kind of digital prison or panopticon. These devices raise challenging questions about what parents have a right to know about, and about children’s legitimate right to privacy.
The second form of surveillance is by commercial companies. Innovation in this field is dominated and driven by the large data companies like Google, Apple and Amazon. Essentially, smart home products enable them to extend the business model of social media to our everyday behaviour and interactions when we are off screen. These are technologies that allow commercial companies to gather information about what we’re doing and talking about, and to send it to remote servers, where in some instances humans may be listening in (as has been the case with Amazon’s Alexa). They can then sell that data to others without us knowing what is happening.
While we might be prepared to pay the price of convenience for ourselves, we may be less inclined when it comes to our children – and of course these devices are not going to distinguish between adults’ data and children’s. This issue is particularly acute given that many smart home devices depend upon voice recognition – or what some have called ‘digital wire-tapping’. Again, we should beware of paranoia; but, for example, do we want a situation where our children happen to mention something they’ve been doing during the day, or something a friend has just got for their birthday – only to find that they are then targeted with advertising for related products when they are playing a computer game or sharing pictures on Instagram?
The European GDPR (General Data Protection Regulation) does require companies to be transparent about this, and to take special precautions in dealing with data on children, but there is some concern that this isn’t happening when it comes to smart home products. There have been calls from the UK Information Commissioner’s Office for an ‘age-appropriate design code’ for this kind of technology – although of course there is room for debate about what this actually means, and how it might be implemented. At the very least, companies should make clear how personal data is being gathered, and what is being done with it; and they should make data collection opt-in rather than opt-out. This requires rather more than checking a box on a set of tiresome ‘terms and conditions’.
Technologizing family life
Underlying all this is a wider concern about the technologizing of family life – and in this context, technologizing is also about monetizing. With smart homes, it’s not just our public life, or even our friendships, that are mediated by technology, but our intimate family relationships as well. This is partly motivated by a growing sense of the home itself – and not just public space outside the home – as a site of risk. Both in the streets and in the bedroom, the child comes to be regarded as vulnerable, threatened and fragile – and technology seems to offer the means to ensure safety and exercise control. Yet as I’ve suggested, technology is also in some sense invading the home, and creating new forms of risk.
From a business point of view, companies may be in danger of misunderstanding their market here. The younger parents they are seeking to reach might be described as ‘digital natives’; but it would be false to assume that they necessarily want family life to be technologically efficient or technologically monitored twenty-four hours a day.
It’s a vast generalisation, I admit, but I have a sense that the famed ‘millennials’ are actually becoming much more ambivalent about technology as they grow older. They may well be highly ‘tech savvy’, but they also display a distrust of technology, perhaps particularly when it comes to their own children. This is also the hipster generation, which is into vinyl records, cycling and organic artisan food – a generation that wants to reduce its carbon footprint, and one for which authenticity is a premium value. The technologizing of family life may not sit very easily with this, and it might well be something that will be resisted…