Humans in the loop: How ‘human data’ is shaping the next generation of AI

In this guest article, Sara Saab, VP of Product at Prolific, discusses artificial intelligence and why human involvement is critical to sustaining support for, and trust in, AI.
Across the globe, in every boardroom, in every analyst newsletter – and around the watercooler – we’re wondering where artificial intelligence (AI) will take us next. Its power is increasingly obvious, and its applications are becoming clearer. It’s no longer a matter of whether AI has the potential to usher in a new golden age of scientific discovery and human intellect. It’s when.
At its best, this new technology will almost instantaneously transform the way companies, industries and societies operate. Take healthcare. We’ve seen AI-powered systems analyse eye movements and highlight abnormalities that may need medical attention. We’ve seen an AI system predict the folded structures of human proteins with unprecedented accuracy, unlocking one of the great unsolved mysteries of our biology.
But there’s a sobering side to this technological moment. Without careful training and modelling, AI can – and will – be felled by bias and misalignment. It will be untrustworthy, and in turn make our interactions with each other untrustworthy. It may cause harm. Already, AIs have reflected racial prejudice in healthcare and displayed sexism in automated recruitment systems. We won’t stamp that vulnerability out overnight; it will take hard work from all of us to stamp it out at all.
Humans in the Loop, Humans at the Heart
Our quest for good AIs must, in the end, come back to our quest for good and fair systems for humans working with humans.
To shape the next generation of AI, we’re going to need specific types of distinctly human involvement. One such type is ‘human-in-the-loop’ (HITL) training, of which Reinforcement Learning from Human Feedback (RLHF) is the current exemplar. It works by having humans give feedback on the outputs an AI system generates: a human annotator ranks or rates examples of the AI’s work, helping to correct weaknesses, biases and gaps in what the AI has learned to do relative to the goals set for it.
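To make that a little more concrete, here is a minimal, purely illustrative sketch (in Python) of one common step behind RLHF: turning an annotator’s ranking of a model’s answers into the pairwise ‘preferred versus rejected’ examples a reward model can learn from. The function and data names are hypothetical and do not describe any particular platform’s pipeline.

```python
# Illustrative sketch only: one common way ranked human feedback is turned
# into pairwise preference data for reward-model training. All names here
# are hypothetical and not drawn from any specific annotation platform.

from itertools import combinations

def rankings_to_preference_pairs(prompt, ranked_responses):
    """Convert an annotator's ranking (best first) into
    (chosen, rejected) pairs for a given prompt."""
    pairs = []
    for better, worse in combinations(ranked_responses, 2):
        # 'better' appears earlier in the ranking, so it is the preferred answer
        pairs.append({"prompt": prompt, "chosen": better, "rejected": worse})
    return pairs

# Example: an annotator ranks three model answers from best to worst
pairs = rankings_to_preference_pairs(
    "How should I talk to a friend who seems low?",
    ["Gently check in and listen.", "Tell them to cheer up.", "Ignore it."],
)
print(pairs)  # three (chosen, rejected) pairs a reward model could learn from
```

Even in this toy form, the human judgement is the whole point: the quality of those rankings determines what the model ultimately learns to prefer.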
For HITL training to work well, choosing the right humans for the right tasks is just as important as the training method itself. The people who take part in this work must be attentive, trustworthy, and engaged – and more than that, they must know more about the domain at hand than their AI counterpart does. And finally, the human teachers of this new technology must be fairly compensated, well-treated, active and willing participants in the mission of harnessing this incredible technology for good.
All of the current research points to the exact same conclusion: who trains your AI matters.
I predict that training state-of-the-art AIs will always rely on HITL methods like RLHF. Over time, turning to human annotators for ‘ground truths’ will enable machines to learn from our human experiences and make better, less biased decisions in the real world.
These HITL methods are themselves becoming more effective and efficient over time, allowing for more nuanced, empathic learning. Again, take healthcare as an example: AI models in healthcare must be able to provide information that is both accurate and ethical. By learning from diverse human teachers, these models will increasingly make choices that reflect the wide range of values and priorities found in the diverse public-health populations they serve. In the case of one of our clients, who asked us to train their mental health chatbots, this meant an AI system that was polite, reliable and – critically – safe. A diverse group of English-fluent people trained this AI to understand what that looked like for different people around the world.
Suffice it to say, there’s no single lived human experience, and no AI should be trained, whether by oversight or in order to reduce costs, as if there is.
Challenges facing RLHF
Training AIs is a complex undertaking, so it’s no wonder that HITL methods like RLHF also come with their challenges. One of the most interesting – if least technical – challenges is finding ways to incentivise high-quality human feedback.
RLHF, even more than other AI data annotation work, hinges on deeply considered feedback and takes real time and effort. RLHF tasks themselves are rarely clear-cut – for a sense of this, next time you complete an online reCAPTCHA challenge, try ranking the sailboats, bridges or motorcycles from best to worst! Hard as the work is, I believe it is necessary if we hope to build fairer, better AIs. Shortcuts – such as AIs training other AIs, or prompt engineering combined with long context windows – will never be good enough for the vast set of real-world applications where it’s imperative that AIs get things right, providing ethical, factual, humanistic information 100% of the time.
The best annotation platforms aim to resolve the challenge of incentivising great RLHF work by paying real-life annotators fairly for their time, vetting them for required domain knowledge, and selecting them for diversity. These platforms also give annotators control: no data is collected without their consent, they have complete autonomy over which AI training tasks they take part in, and they have tools to flag or opt out of tasks they find offensive or disturbing. Bank-grade ID verification protects these platforms from infiltration by bots or bad human actors.
Human-in-the-loop AI training methods like RLHF deserve pride of place in our toolkit for developing the next generation of AIs. After all, we’re already seeing what happens when humans are cut out of this process. Media organisations and privacy advocates continue to scrutinise the low-quality, stolen, or laundered personal data fed into AIs via vast training datasets, while AI developers fear that generic data scraped from the web cannot push the performance of their state-of-the-art models any further.
Above all, regulators are proposing legislation – such as the EU’s AI Act – to strengthen rules around data quality, transparency, human oversight and accountability. Simply put, the bar for AI training data is rising higher and higher.
Only the best human feedback will be able to meet that bar.
