Designing Systems for Humans: The Essential Role of Human Factors in Modern Technology
- Georgia Hodkinson, GMBPsS

- Jan 19

Georgia Hodkinson, GMBPsS is an Occupational Psychologist and Human Factors Specialist, currently completing Stage 2 Chartership with the British Psychological Society. She runs Georgia’s PsyWork Ltd, delivering evidence-based work across human factors, fatigue, leadership, and performance, and is Director of Operations and Marketing at the Psychology Business Incubator (PBI), where she supports applied psychology in practice within a collaborative network. Her work is grounded in designing systems that support real human capabilities, limits, and wellbeing.
Designing with Humans in Mind: Why Human Factors Must Lead the Way
In an increasingly digitised and automated world, it’s tempting to assume that the human element in systems design is becoming less critical. In truth, the opposite is happening. As systems grow in complexity and automation becomes more deeply embedded in our daily lives, understanding how humans interact with these systems becomes essential.
As an occupational psychologist and human factors specialist, I’ve spent years studying how environments, tools, and organisational practices can support or undermine human performance. The field of human factors, sometimes referred to as ergonomics, sits at the intersection of psychology, design, engineering, and ethics. It’s about designing systems that respect human capabilities, limitations, and needs.
Rethinking “Human Error”
A commonly misunderstood concept, in both public and professional spaces, is human error. It’s often used as a convenient explanation when things go wrong, be it a surgical mistake, a transport accident, or a cybersecurity breach. However, human factors research consistently shows that what we label as “error” is typically the result of system design mismatches, latent conditions, or organisational pressures.
The Swiss Cheese Model (Reason, Hollnagel & Pariès, 2006) remains foundational in understanding how accidents occur through a series of systemic vulnerabilities. Blaming the end user obscures the opportunity for learning and improvement. Human error should be the starting point of an investigation, not its conclusion.
Human-Centred Design
Human-centred design (HCD) is central to human factors. It means designing with the user, not just for them. It involves understanding how people process information and make decisions under pressure, and how fatigue, stress, and environmental factors shape performance.
In healthcare, Carayon et al. (2014) found that applying human factors principles to intensive care units led to measurable improvements in communication, patient safety, and clinician satisfaction. Similarly, Boy (2017) emphasises that complex, safety-critical systems, from aviation to energy, must integrate human-centred design from the ground up. The goal is to enable people to perform better.
Ethics and Responsibility in Design
We have a responsibility to advocate for systems that protect, respect, and empower people. This is particularly critical in high-stakes industries such as healthcare, transport, and defence, where design decisions can literally save or cost lives.
Ethical responsibility also extends to everyday technologies. With the rise of AI, data collection, and algorithmic management, the question “Can we build it?” must be followed by “Should we build it?” and “Who might be harmed by this?”
Recent work on human-centred artificial intelligence (Shneiderman, 2020) argues for “supervised autonomy”, designing AI that augments rather than replaces human decision-making. Likewise, Visciòla (2025) calls for a “sustainable human-centred design” that embeds fairness, transparency, and ecological responsibility into the development of digital systems.
Human factors professionals must therefore act as translators between engineers, designers, end users, and executives, and as ethical stewards, ensuring that human wellbeing is prioritised alongside performance metrics.
The Promise and Peril of Automation
Automation is transforming the way humans interact with technology, but it brings both promise and peril. While it can reduce workload and improve efficiency, it can also lead to “automation surprise”, when systems act unpredictably, and the “out-of-the-loop” performance problem, where humans lose situational awareness during extended periods of passive monitoring (Endsley & Kiris, 1995).
Trust plays a central role here. As Perkins et al. (2010) found, human trust in automation varies with perceived risk: too little trust causes underuse, while too much leads to complacency. Designing for appropriate trust calibration is now seen as one of the defining challenges in automation.
In manufacturing, this dynamic is being tested in human–robot collaboration. Boschetti et al. (2022) developed a human-centred design method that balances productivity and safety in robotic workspaces. Their findings echo a broader truth: automation must be designed around human strengths and limitations.
Psychological Safety and Organisational Culture
Human factors extends far beyond physical or interface design. Organisational culture, particularly psychological safety, is critical to maintaining performance in complex systems. Amy Edmondson (1999) demonstrated that teams whose members feel safe to speak up about errors or system flaws learn faster and perform better. When staff are punished for raising concerns, organisations lose vital feedback loops that drive improvement.
Recent work by Lyon et al. (2020) integrates human-centred design principles into organisational psychology, showing how participatory, empathetic design processes foster trust and engagement across teams. When leadership supports open communication and reflective practice, human factors principles can transform entire cultures.
Limitations of Human Factors and Why They Matter
While human factors offers a powerful framework, it is not infallible: human behaviour is variable and context-dependent, and what works in one setting may fail in another. In emerging fields like AI and digital health, longitudinal data on human–system interaction remains limited. And even when robust design recommendations exist, organisational resistance, time pressures, or resource constraints can block implementation.
Recognising these limitations doesn’t weaken the field; it strengthens it. It reminds us to approach design with humility, evidence, and a commitment to continuous learning.
Looking Ahead: Integration
As we look to the future, the need for integrated, multidisciplinary thinking has never been greater. Whether we’re designing workplaces, deploying AI, or tackling societal challenges like climate change and ageing populations, human factors must be part of the conversation from the outset.
Incorporating human factors is about ensuring innovation serves people effectively, ethically, and sustainably. As Margetis et al. (2021) argue, moving from one-dimensional automation to human-centred autonomy will define the next decade of design.
Conclusion
Designing systems with humans in mind is a moral and professional imperative. As an occupational psychologist and human factors specialist, I will continue advocating for approaches that see people as sources of insight, adaptability, and value. If we design for real people, we can build systems that are safer, more efficient, and more humane.
References
Boschetti, G., Faccio, M., & Granata, I. (2022). Human-centered design for productivity and safety in collaborative robots cells: A new methodological approach. Electronics, 12(1), 167.
Boy, G. A. (2017). Human-centered design of complex systems: An experience-based approach. Design Science, 3, e8.
Carayon, P., Wetterneck, T. B., Rivera-Rodriguez, A. J., Hundt, A. S., Hoonakker, P., Holden, R., & Gurses, A. P. (2014). Human factors systems approach to healthcare quality and patient safety. Applied Ergonomics, 45(1), 14-25.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.
Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37(2), 381-394.
Lyon, A. R., Brewer, S. K., & Areán, P. A. (2020). Leveraging human-centered design to implement modern psychological science: Return on an early investment. American Psychologist, 75(8), 1067.
Margetis, G., Ntoa, S., Antona, M., & Stephanidis, C. (2021). Human-centered design of artificial intelligence. In Handbook of human factors and ergonomics (pp. 1085-1106).
Perkins, L., Miller, J. E., Hashemi, A., & Burns, G. (2010, September). Designing for human-centered systems: Situational risk as a factor of trust in automation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 54, No. 25, pp. 2130-2134). Los Angeles, CA: SAGE Publications.
Reason, J., Hollnagel, E., & Pariès, J. (2006). Revisiting the Swiss cheese model of accidents. Journal of Clinical Engineering, 27(4), 110-115.
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495-504.
Visciòla, M. (2025). Rethinking Human-Centered Design: From Automation to Sustainable Innovation. Interactions, 32(4), 22-27.