
GUEST CHAT with Dale Richards, Expert in Human Factors

In our third interview with an external guest we meet Dale Richards, a Senior Lecturer at Nottingham Trent University. He teaches Human Factors there and is also Research Group Lead of the Human Factors & Performance Research Group. Dale is part of several multidisciplinary research projects, providing Human Factors expertise across different domains, but his main research focuses on how humans interact with complex and intelligent systems, ranging from automation and autonomy to AI.

In our chat, Dale talks about his passion for encouraging the minds of the future, teaching them that “you can achieve anything you want”, and about the importance of engaging with engineers and demonstrating the value of human factors. Dale dismantles the myths shouted at the general public about Terminator-like AIs and appeals to the community to project more positive stories about the use of intelligent systems, so society can start adopting the cutting-edge technologies that already exist out there for good!

Tell us about the role of Human Factors and Intelligent Systems

If someone came to you today and said “build me a display for a ground station that controls unmanned systems or intelligent systems”, you might instantly run to the library to find a book or guidance documentation. It doesn’t exist, because we are dealing with emerging technology that is so cutting-edge. If you say to me “build me a typical display”, then fine: medical displays, aviation displays – we know how to design these well. The underpinning theory behind them is well documented, and good and bad design principles are pretty black and white. But when it comes to advanced technology like intelligent systems, autonomous agents or unmanned systems, that area gets very grey quite quickly. Commercial companies often ask for something to read for guidance, but there are only some research papers and principles papers available, and no specific guidance. What exists is at more of an advisory level – take this into account, think about critical information, situation awareness. That is not what commercial companies require. Quite rightly, they are looking for far more prescriptive assistance – how does it look? How does the user interact with it? How can my users trust it?

So the type of work we’re doing with Decision Lab is exactly along those lines. After working with Decision Lab for some time, it is clear that you folk develop awesome AI across amazing pieces of kit; and at some point there will be a need to display it to users. This is where I jump in, with my experience of designing displays and focusing on how the user interacts with the system. I have been fortunate in terms of my experience across different disciplines, such as Cognitive Science, Human-Computer Interaction, Cognitive Engineering and Psychology, and that blending across disciplines cannot simply be found in a book. This is where Human Factors can really help, as it adopts a multi-disciplinary focus – design, interaction, training, breaking down tasks, and so on.
My area is the sociotechnical and cognitive side – understanding what is in someone’s head: what information to give them to enable them to achieve the goal they need to achieve.

Give us the secret of building a good display. What is the set of the golden rules?

Sometimes people say: “Well, it’s just about building a front end”. This always makes me sigh, because there is so much more to it, as any Human Factors specialist will tell you. You need to define the problem. I always reinforce this point with my students: define and understand the problem space before launching into possible solutions. It is from this point that you can list both user and system requirements, so getting that bit wrong really sets you up to fail. The process gets even more complex if a system is a bit smarter than a merely automated one: if it possesses autonomy or is deemed to be intelligent, then the system element becomes as complex as the human – if not more so. We can elicit requirements from human users, but it’s not exactly the same process to sit down with an AI system and have that conversation. At this point it comes down to designing harmony between the human and the system, where both elements understand each other and the goal they share becomes aligned via joint tasks. If you neglect this aspect, you could end up with an AI system that is fighting the user, or be confronted with a human who simply disengages from using the AI.

It is remarkable how AI and human requirements sometimes overlap. Just as we interrogate the human for requirements, an AI system may equally display complacency and inherent bias. This is why we need to think about establishing trust between the human and the system.

Talking about the buzzwords in our area: trust and transparency…

Too often I’ve seen engineers thinking: if I make a system more transparent, will the users trust it more? Providing more and more information to the user does not necessarily make the system better. We don’t want that extra information to overload or distract the user. Far too often I have seen users get really engrossed in why an AI agent behaves in a certain way. Watching it is compelling, and we strive to understand the rationale behind the behaviour – just as if we were watching another human behaving in a set context. For example, I am sure we can all remember driving and at some point seeing the car ahead of us act in a manner that made us process it a little bit more. And maybe alter our behaviour as a result? If we then saw a beer can thrown out of the window, or glimpsed a mobile phone being used, we had tested our theory of intent. These cues (sometimes subtle ones) can be used in a similar way to build the relationship between the human and the intelligent system. Introducing appropriate levels of transparency and explainability allows the human to test these hypotheses of behaviour and intent, and that is what will eventually define trust.

Having said all that, I cannot stress enough the importance of working alongside the AI designers and building this design philosophy together. My role is very much geared towards working together with the engineers and end users. You’ve got to build this collaborative design environment around you. If you work in isolation, you can develop an awesome solution, but then a user presses a green button to go and things start happening that you don’t want to happen. It’s got to be a collaborative effort.

Talking about the trends

The focus always seems to be on technology, and we all know how important this is in terms of advancing society and addressing real global challenges. However, where I think our community is falling behind is in public engagement. Public opinion is really important. We can talk a lot about how autonomous cars will drive along the motorways, but we also need to gauge what the public perception is and what people are going to accept. What is the uptake of this technology going to be like with the public? What’s the point of having a drone delivering your shopping – using an array of sophisticated sensors to avoid collisions, navigate autonomously and then safely land in your back garden – if we haven’t asked how the customer, or simply the neighbour whose house is ‘buzzed’ every Thursday at 8pm, is going to feel? How are we going to engage with this technology as individuals, and in wider society? As an industry we forget it’s pointless to build all these fantastic systems unless the public is going to use them and accept them. And this partially comes down to the language we use and how we issue stories to the media.

I did a study where we looked at media stories about drones. They were mainly negative, apart from stories about disaster relief, the police catching someone, or finding a missing person. Now there are big initiatives in the drone community, like “drones for good”, to get more positive stories out. The same thing needs to happen in AI.

So what kind of messages should the community be communicating about AI?

When you talk about AI to the general public, they think about the Terminator, when in fact we’re nowhere near a generalised AI – nowhere near anything like a Terminator-style robot. We have narrow AIs, which can focus on certain tasks that AI can help us with. But think about a human – we possess a level of general intelligence and all these encompassing skills that we have evolved to apply when required. AI is nowhere near that, and we are a considerable distance from having systems with that sort of capability. Some of the leading websites scream about robots taking over people’s jobs, and some of it is true. But it’s all about how we adapt and focus. It should be more about how AI helps us – makes things safer, easier, more efficient, and can save lives. As a community we should be putting positive stories out there about how we apply technology for good. What needs to be out there are the stories about AI processing X-rays, where it takes minutes for AI to do a task that would take a human weeks.

What is the role of the human element with intelligent systems?

I haven’t seen a fully autonomous system yet, although I am sure some will disagree. There is always a human involved somewhere, authorising or monitoring progress.

The human element in these systems is critical – more so when we think of the high ethical stance that we take as a nation and the ethos that runs through the fabric of our defence forces. When you work on defence projects you get to see the importance of making critical decisions. I am often in awe at the professionalism and the layers that are embedded when important decisions have to be made. It is incredibly well orchestrated. I do sometimes find it frustrating that the image often portrayed of these decisions is rather glib: you see someone sat in the comfort of their own home, casually pressing a button as a robot goes off and does harm. That is so far removed from the reality. These aren’t decisions that are taken lightly; I’ve been privileged to witness this in training exercises and have been overwhelmed with pride. There are very strong rules of engagement attached to these critical, and sometimes hard, decisions.

What companies or developments out there excite you?

I am very much domain agnostic. I don’t have a pet area that I like working in. It’s the same research – how does a human interact with an intelligent system?

Most of the cutting-edge research comes from defence. Aviation has traditionally been an incubator for these sorts of technologies, but we can see advances in autonomous cars, and the maritime sector is rapidly seizing opportunities. The health sector is quite exciting too: we have seen advances in diagnostics and prognostics, not forgetting surgical robotics. I am reminded of the University of Cincinnati and some really interesting research using fuzzy logic with prognostics for medical intervention for people with mental health issues.

Tell us what drives you

I have on occasion done STEM talks, standing in front of students who are taking their A-levels and talking about Human Factors and Engineering. I try to reinforce the message that you can achieve whatever you want to achieve! I often give myself as an example of how not to go about your career, because I left school with just one GCSE. Afterwards I realised I needed to do something, so I did some Open University credits, then went to college and completed an Access Course. I am so thankful to the Admissions Dean at Swansea University, and I fondly recall the interview and discussion I had with him, trying to convince him to take a chance on me. I started out on a joint honours degree in Psychology and Philosophy. I actually did better in Philosophy, but decided that Psychology was what I wanted as a career. In my final year I really enjoyed neuropsychology and cognitive psychology, and when I got my degree I went back to the Admissions Dean and thanked him for the opportunity.

After my PhD offer at Loughborough fell through due to funding changes, my first-year tutor at Swansea encouraged me to apply for a University of Wales Scholarship. That tutor became my PhD supervisor and taught me things that I often find myself repeating to my own PhD students. During my PhD at Swansea I really enjoyed teaching – having students engage, ask questions at the end of a lecture, or come up to you randomly to discuss their thoughts about something. Following my PhD I joined QinetiQ, but when I returned to Swansea a few years later I made a point of visiting the Admissions Dean one final time, just to let him know that the chap who left school with one GCSE and a handful of qualifications was now a Dr. A decade in industry, primarily working on autonomous systems, winning several awards, and being promoted to lead the Human Factors effort across several great projects still makes me feel lucky.
My return to academia has allowed me to give something back to Human Factors, teaching it to Engineering students in particular. Over the last several years I have been fortunate to work with some great colleagues, and privileged to be part of some students’ journeys.

What’s next?

Well, the passion for industry has never left me, and I have often wrestled with an internal struggle between academia and industry. AI is not going away anytime soon, and the need for Human Factors is only going to grow, so my next chapter will see me joining Frazer-Nash Consultancy to work with some amazing folk on exciting projects. I will still keep in touch with academia, making sure the message about Human Factors and intelligent systems is heard – and perhaps that industry and academia need to merge more going forward.

Natasha Zheltovskaya
