Essays explore the hellscape of freelance AI model training

Ever wondered what it's like to train AI models? Sounds cutting-edge and cool, maybe? Like something interesting, where you might pick up some helpful new skills? Well, according to some people who have recently done this work for one of the biggest AI companies in the world, training AI is chaotic and inconsistent at best. And, according to Cathy Glenn, in a new piece about her work training models at Outlier, which is part of Scale AI, AI model trainers are subjected to "predatory labor practices" that "create authoritarian cultural conditions for workers, not just abroad, but also here in the U.S."

In that piece, Glenn describes her recent work at Outlier, which sounds frustrating, to say the least. Here are some excerpts from her eye-opening essay (which you should absolutely read in full, here):

Over two months, I was moved 18 times to different projects. . . Extensive training and four evaluation tasks were necessary for me to be allowed to work on the Ostrich project for OpenAI. Before starting my first two tasks, the only training was reading the convoluted instructions. Everyone working toward the project was promised feedback on their first and second tasks so that we could adjust and improve our performance on the following two tasks. No evaluation criteria were offered, and the promised reviews were not accessible.

After the Ostrich team admitted to losing the first two tasks from workers who completed them – each task takes up to 6 hours to complete – anxiety, fear, frustration, and chaos ensued on the Slack channels. No reviews of work – or rushed, hostile reviews that made no sense – were the norm for hundreds working toward admission to OpenAI's Ostrich. 

Last summer, Josh Dzieza wrote a great piece for The Verge about his own experiences working for the same company, alongside those of AI trainers he interviewed in Kenya. Here are some excerpts from his piece (and here's the whole thing, also definitely worth a read):

According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour.

That is, when they were making any money at all. The most common complaint about Remotasks work is its variability; it's steady enough to be a full-time job for long stretches but too unpredictable to rely on. Annotators spend hours reading instructions and completing unpaid trainings only to do a dozen tasks and then have the project end. There might be nothing new for days, then, without warning, a totally different task appears and could last anywhere from a few hours to weeks. Any task could be their last, and they never know when the next one will come.

Finally, earlier this week a Reddit user who has also been tasking on Outlier wrote up their experiences as a sort of overview/warning for people who are new to the platform or considering signing up as freelancers. It's also worth a read, as it outlines some truly astounding(ly bad) company practices. Here's an excerpt (and the link to the Reddit thread):

There are hundreds and in some cases thousands of people across the world being brought in at any given time. The first Slack group I was put in had 965 people. They hire in mass. You aren't special, even if they hired you at the Tier 3 $40/hr level. I was brought in at that level with, if I'm right, about 300 other people on the same day. Because of that volume, your individual questions in Slack will rarely be answered until you happen to get put on a team with a Team Leader (TL). I've seen people be put in the general onboarding Slack channel and plead and beg for someone to respond to them for sometimes weeks at a time. I'm impressed they kept trying. Fact is, the volume is such that people fall through the cracks . . . 

You will be assigned to and removed from projects without warning. 

You will be placed in and pulled out of Slack channels without warning.

You will "train" (sometimes without being paid) for projects that you will never have a chance to work in. (Normally this happens because you'll be placed on a different project just after or during the period you're reading the training materials.)

Training materials and procedures will change without warning or notification . . . 

You will be told you will receive feedback on tasks or training tasks, but it never happens. (Sometimes you will receive feedback that makes little sense or seems contradictory to the training. This is because the "taskers" are often "reviewers" as well, and the quality of the reviews and feedback depends on the person who happens to be reviewing your work. 

Sometimes they will be reviewing you on outdated versions or understanding of the training. 

Sometimes this means that you will have your pay Tier lowered or even be let go unfairly. There is, to my knowledge, no reliable way to appeal this. Some people have stories about doing it, but when you try to repeat their steps, the system or platform may have changed.)

"Team Leaders" or other supervisors will disappear, be furloughed, be turned into regular taskers like you, without notice or explanation. (They also, as a rule, don't know what's going on. They have usually only been there for a month or two before you and are obviously working on instructions given them immediately prior to passing along information, so they're not really "part of the company," either.)

Yikes! For all the talk of the futuristic, utopian promise of AI, the experiences recounted above sure are giving off decidedly oppressive and dystopian vibes.

Previously: Photobucket archives may sell for billions to train AI