Suppose that tomorrow you decide to go into the self-driving car business. You’ll need a car. You’ll need to attach cameras and sensors to your car so that it can “see” what’s around it. And you’ll need software to drive it. The software will need to interpret the images, video, and other data captured by the cameras and sensors and decide when and how fast to accelerate, brake, and turn. To do this it will use “machine learning” — you’ll “train” it by “showing” it lots of road situations and “telling” it what you would do in those situations. In real life, the software will encounter situations that are different from those it’s been “trained” with, but they’ll be similar enough that it will be able to make reasonable — and safe — decisions.
You’ll also “train” the various pieces of the system. The software can’t know, without being “trained,” what a car looks like, or a bicycle, or a pedestrian, or a truck. (The infamous first fatal Tesla autopilot accident, for example, reportedly involved the system “mistaking” a white truck for empty sky.) It needs to be supplied with huge quantities of images — photographs and video captures of street scenes — that have been “segmented” or “annotated.” That is, the parts of the image containing cars need to be labeled “car,” the parts containing trucks need to be labeled “truck,” the parts containing bicycles need to be labeled “bicycle,” the parts containing pedestrians need to be labeled “pedestrian,” and so on. In the self-driving car business, and the “machine learning” business more generally, this is called “training data.”
For self-driving cars to be safe, these labels must be extremely precise; a mistake can mean death. For the most part, the required precision cannot yet be delivered by automated means. To bridge the gap, makers of self-driving cars turn to specialist vendors such as Seattle-based Mighty AI to have their raw data annotated and turned into usable training data. But Mighty AI doesn’t do this by having huge offices, crammed with workers. Without making too much of a fuss about it, many such companies turn to a “crowd” of freelancers all over the world, many working from home.
The self-driving car industry is not alone in pursuing this “human-powered” approach to developing services understood as “artificial intelligence.” The same approach can be found in the speech recognition algorithms behind customer support systems, in “automated” transcription systems, and in translation services. In each case, human workers are managed through, or by, a “platform.”
The idea of using self-employed human labor managed by software to fill in where the current state of artificial intelligence falls short of the desired result may have first — or at least most famously — been promoted by Amazon, with the 2005 launch of “Mechanical Turk,” a “platform” for information work. That the engineers who built Mechanical Turk fully understood the implications of their work is indicated by the system’s original slogan: “artificial artificial intelligence.”
What’s the difference between “artificial artificial intelligence” and “real” — that is, human — intelligence? Well, customers of “artificial artificial intelligence” are not seen as employers or managers; they are seen as users who access a service through a website, as if it were a purely computational system rather than an interface to thousands of human workers. There is no hiring or management in the traditional sense. Finally, and crucially, workers recruited via digital platforms have no rights beyond those specified in the platform’s terms and conditions — which the platform operating company writes itself.
Lukas Biewald, founder of CrowdFlower (now Figure Eight), may have put it more clearly than anyone when he said: “Before the Internet, it would be really difficult to find someone, sit them down for ten minutes and get them to work for you, and then fire them after those ten minutes. But with technology, you can actually find them, pay them the tiny amount of money, and then get rid of them when you don’t need them anymore.” You’ve got human workers, but you can treat them as if they were a computer. It seems Silicon Valley has finally come up with an answer to Henry Ford’s complaint: “Why is it every time I ask for a pair of hands, they come with a brain attached?”
The production of “training data” for machine learning — or, as it has recently come to be called, “artificial intelligence” — may be the most recent and technologically sophisticated way of utilising a crowd of workers through a platform, but it is far from the only one. Indeed, any work whose product or service can be delivered electronically or performed remotely can be outsourced via a platform. Google and other search engines outsource the refinement of their search results to human workers — sometimes freelancers without employment contracts, rights, or benefits — via platforms. Social media companies outsource a great deal of content moderation work — that is, checking material posted by users for pornography, violence, and other content that violates their terms of service — sometimes via platforms. Private clients and huge organizations recruit freelancers via platforms such as Upwork and Freelancer.com for work ranging from data entry — for as little as a few US dollars per hour — to accounting, legal research, and project management — for over a hundred US dollars per hour.
For more highly skilled, traditionally better-paid work, clients can turn to “contest-based” platforms such as 99designs (graphic design), TopCoder (programming), Local Motors (engineering design), and InnoCentive (scientific research), where they can post a project brief and invite hundreds of workers to submit work — but pay only the one or few winners whose work they select for use. And social and psychological scientists increasingly rely on platform workers as human subjects.
How do we assess this emerging landscape of labor relations? It’s not hard to criticize platform clients and operators as exploitative — or at least as beneficiaries of previously existing unjust inequalities. In some cases, clients in rich countries pay workers in rich countries less than their national minimum wage. In other cases, clients in rich countries pay workers in poorer countries at rates that few workers in their home countries would accept but that workers in poorer countries — or countries in economic crisis — find adequate, or even excellent. These clients and platforms benefit from differences in costs of living between rich and poor countries — differences that result partly from centuries of colonialism and other brutal and exploitative histories. And some online platforms have effectively created global labor markets, putting clients in the position of, perhaps unwittingly, benefiting from a global “race to the bottom” in wages as workers compete for work.
Yet however inequitable the situation of online labour platform workers appears, it seems often to be yet another case in which, as the economist Joan Robinson wrote, “The misery of being exploited by capitalists is nothing compared to the misery of not being exploited at all.” If all labor platforms were shut down tomorrow, many workers would be at least as negatively affected as the clients.
This is not to excuse, for example, the one-sided legal terms, unlivable wages, or arbitrary nonpayment and “account deactivation” — platform-speak for dismissal — that one easily finds in the world of online labour platforms. The point is simply that if your job sucks, usually the thing to do is not to blow up your workplace but to fight to improve your working conditions. And labor platforms are exactly that: a workplace. Blanket criticisms of labor platforms as such do not help much. Rather, we need detailed understandings of the particular conditions of work in particular platforms, sectors, and jurisdictions, and strategies to address concrete problems and risks faced by particular groups of workers. Ultimately, we need to establish workers’ rights — through collective agreements, and, as appropriate, national and international policy.
In the last decade a patchwork quilt of strategies and initiatives for improving platform workers’ material conditions and building collective power has emerged at a variety of levels. It would be hard to argue that any of these efforts has yet won a decisive victory in terms of bringing the working conditions and rights of online platform workers into line with those of “traditional” employees. Perhaps, however, they have laid the foundations on which those victories will be built.