
Designing Systems for Humans: The Essential Role of Human Factors in Modern Technology


[Image: A silhouetted figure interacting with a digital network interface, with glowing nodes and circuits in blue and pink on a dark background.]

Georgia Hodkinson, GMBPsS is an Occupational Psychologist and Human Factors Specialist, currently completing Stage 2 Chartership with the British Psychological Society. She runs Georgia’s PsyWork Ltd, delivering evidence-based work across human factors, fatigue, leadership, and performance, and is Director of Operations and Marketing at the Psychology Business Incubator (PBI), where she supports applied psychology in practice within a collaborative network. Her work is grounded in designing systems that support real human capabilities, limits, and wellbeing.


Designing with Humans in Mind: Why Human Factors Must Lead the Way


In an increasingly digitised and automated world, it’s tempting to assume that the human element in systems design is becoming less critical. In truth, the opposite is happening. As systems grow in complexity and automation becomes more deeply embedded in our daily lives, understanding how humans interact with these systems becomes essential.


As an occupational psychologist and human factors specialist, I’ve spent years studying how environments, tools, and organisational practices can support or undermine human performance. The field of human factors, sometimes referred to as ergonomics, sits at the intersection of psychology, design, engineering, and ethics. It’s about designing systems that respect human capabilities, limitations, and needs.


Rethinking “Human Error”


The concept of human error is commonly misinterpreted in both public and professional spaces. It’s often used as a convenient explanation when things go wrong, be it a surgical mistake, a transport accident, or a cybersecurity breach. However, human factors research consistently shows that what we label as “error” is typically the result of system design mismatches, latent conditions, or organisational pressures.


The Swiss Cheese Model (Reason, Hollnagel & Pariès, 2006) remains foundational in understanding how accidents occur: each layer of defence in a system has weaknesses, and harm results when those weaknesses align across successive layers. Blaming the end user obscures the opportunity for learning and improvement. Human error should be the starting point of an investigation, not its conclusion.


Human-Centred Design


Human-centred design (HCD) is central to human factors. It means designing with the user, not merely for them. It involves understanding how people process information and make decisions under pressure, and how fatigue, stress, and environmental factors shape performance.


In healthcare, Carayon et al. (2014) found that applying human factors principles to intensive care units led to measurable improvements in communication, patient safety, and clinician satisfaction. Similarly, Boy (2017) emphasises that complex, safety-critical systems, from aviation to energy, must integrate human-centred design from the ground up. The goal is to enable people to perform better.


Ethics and Responsibility in Design


We have a responsibility to advocate for systems that protect, respect, and empower people. This is particularly critical in high-stakes industries such as healthcare, transport, and defence, where design decisions can literally save or cost lives.


Ethical responsibility also extends to everyday technologies. With the rise of AI, data collection, and algorithmic management, we must ask not only “Can we build it?” but also “Should we build it?” and “Who might be harmed by it?”


Recent work on human-centred artificial intelligence (Shneiderman, 2020) argues for “supervised autonomy”: designing AI that augments rather than replaces human decision-making. Likewise, Visciòla (2025) calls for “sustainable human-centred design” that embeds fairness, transparency, and ecological responsibility into the development of digital systems.


Human factors professionals must therefore act as translators between engineers, designers, end users, and executives, and as ethical stewards, ensuring that human wellbeing is prioritised alongside performance metrics.


The Promise and Peril of Automation


Automation is transforming the way humans interact with technology, but it brings both promise and peril. While it can reduce workload and improve efficiency, it can also lead to “automation surprise”, when systems act in ways operators don’t expect, and the out-of-the-loop performance problem, where humans lose situational awareness during extended periods of passive monitoring (Endsley & Kiris, 1995).


Trust plays a central role here. As Perkins et al. (2010) found, human trust in automation varies with perceived risk: too little trust causes underuse, while too much breeds complacency. Designing for appropriate trust calibration, so that operators’ trust matches what a system can actually do, is now seen as one of the defining challenges in automation design.


In manufacturing, this dynamic is being tested in human–robot collaboration. Boschetti et al. (2022) developed a human-centred design method that balances productivity and safety in robotic workspaces. Their findings echo a broader truth: automation must be designed around human strengths and limitations.


Psychological Safety and Organisational Culture


Human factors extends far beyond physical or interface design. Organisational culture, particularly psychological safety, is critical to maintaining performance in complex systems. Amy Edmondson (1999) demonstrated that teams whose members feel safe to speak up about errors or system flaws learn faster and perform better. When staff are punished for raising concerns, organisations lose the vital feedback loops that drive improvement.


Recent work by Lyon et al. (2020) integrates human-centred design principles into organisational psychology, showing how participatory, empathetic design processes foster trust and engagement across teams. When leadership supports open communication and reflective practice, human factors principles can transform entire cultures.


Limitations of Human Factors and Why They Matter


Human factors offers a powerful framework, but it has limits. Human behaviour is variable and context-dependent; what works in one setting may fail in another. In emerging fields like AI and digital health, longitudinal data on human–system interaction remains limited. And even when robust design recommendations exist, organisational resistance, time pressures, or resource constraints can block implementation.


Recognising these limitations doesn’t weaken the field; it strengthens it. It reminds us to approach design with humility, evidence, and a commitment to continuous learning.


Looking Ahead: Integration


As we look to the future, the need for integrated, multidisciplinary thinking has never been greater. Whether we’re designing workplaces, deploying AI, or tackling societal challenges like climate change and ageing populations, human factors must be part of the conversation from the outset.


Incorporating human factors is about ensuring innovation serves people effectively, ethically, and sustainably. As Margetis et al. (2021) argue, moving from one-dimensional automation to human-centred autonomy will define the next decade of design.


Conclusion


Designing systems with humans in mind is a moral and professional imperative. As an occupational psychologist and human factors specialist, I will continue to advocate for approaches that see people as sources of insight, adaptability, and value. If we design for real people, we can build systems that are safer, more efficient, and more humane.


References




