Y Samant, R Naz Malik, App jobs, algorithms, and risks: Hidden hazards of platform work, Occupational Medicine, Volume 75, Issue 1, January 2025, Pages 6–8, https://doi.org/10.1093/occmed/kqae131
Algorithmic management is increasingly common in the platform economy, affecting work distribution, performance monitoring and pay structures.
Platform workers face health risks such as musculoskeletal disorders, stress and traffic injuries.
The platform economy creates precarious employment situations, often without the social protections enjoyed by traditional employees.
This commentary highlights the impact of constant surveillance and algorithmic decision-making on psychosocial and physical health in platform workers.
It suggests that transparent algorithms, enhanced worker protections and specific occupational safety measures are crucial for risk mitigation.
The need for responsible artificial intelligence (AI) practices in platform work is emphasized, linking worker health to the design of algorithmic systems.
Occupational health services could develop AI-driven risk assessment tools tailored for platform workers.
Policymakers should reform labour laws to ensure gig workers are afforded basic rights, safety and health protections.
Greater accountability and transparency in algorithmic decision-making can lead to decent, safe and healthy work conditions for platform workers.
The platform economy refers to a business model where digital platforms—such as mobile apps or websites—connect workers with customers for services like food delivery, ride-hailing and freelance tasks. These platforms act as intermediaries, coordinating the work and payment through algorithms, while workers typically operate as independent contractors.
The rapid growth of the platform economy, particularly in food delivery and ride-hailing services, has revolutionized the way the world works and consumes. These tasks, often referred to as ‘app jobs’, are accessible through mobile applications to anyone seeking flexible employment. However, beneath the surface of convenience lies a complex web of algorithmic systems that pose significant risks to the health and safety of platform workers [1]. The management of these workers through algorithms—digital systems that control nearly every aspect of their tasks, from work allocation to payment and performance evaluation—has transformed the very nature of work. While these algorithms promise efficiency and objectivity, they often overlook critical elements of human labour, thereby exposing workers to both physical and psychological risks [2,3].
In the platform economy, most workers are classified as independent contractors, which excludes them from labour protections typically afforded to employees. This classification contributes to the underestimation of the occupational health and safety issues they face, including physical injuries, psychosocial health issues and economic precarity [4]. Unlike the narrower term ‘mental health’, ‘psychosocial health’ acknowledges the interplay between an individual’s mental state and their social environment, encompassing aspects such as work relationships, organizational culture and job security. This broader understanding of worker mental health is crucial for addressing the systemic challenges faced by all workers, including those in the platform economy, a focus echoed in ongoing EU discussions concerning worker protections. Unlike traditional employees, platform workers are often left out of surveillance systems designed to track occupational risks, leaving their physical and psychosocial health challenges underreported and sometimes entirely invisible [5]. Although platform workers may work in high-risk sectors such as transport (couriers and taxis), in countries like Norway they are excluded from the legal protections regarding occupational health services that their peers in traditional transport businesses enjoy, because they are misclassified as independent contractors. This misclassification sometimes extends to the platforms themselves, which are classified as web portals rather than as transport or courier service providers.
One of the primary mechanisms of managerial control in the platform economy is algorithmic management, which subjects workers to constant real-time monitoring and assessment. The result is a culture of anxiety and stress, in which workers feel pressured to meet algorithmic demands or face potential deactivation of their profiles, forcing them to work under unsafe conditions [6].
Chronic stress is a common by-product of such a work model and has been shown to elevate cortisol levels, increasing the risk of musculoskeletal and psychological disorders as well as a range of other health problems [7]. This is compounded by the financial instability inherent in algorithmic pricing and work allocation systems. Workers, required to be available on demand, often face income volatility and extended periods of unpaid waiting. This unpredictability and economic insecurity exacerbate psychosocial stress [8]. Precarious working conditions are known precursors to more serious health consequences, including psychosocial health issues such as depression and anxiety [9]. In physically demanding jobs such as food delivery and ride-hailing, these risks are further intensified: workers face a heightened risk of traffic injuries and repetitive strain injuries because they are under constant pressure to meet performance targets and customer demands [10].
Additionally, the psychological toll of algorithmic management is significant. Many workers report a sense of dehumanization, feeling more like cogs in a machine than individuals performing valuable and meaningful work. The constant pressure to maintain high performance ratings, often tied directly to income, leaves workers emotionally drained and prone to burnout [11]. The high-demand, low-control nature of these jobs, where workers are subject to intense pressure yet have minimal influence over their working conditions, creates a recipe for chronic fatigue, musculoskeletal disorders and psychological distress [12]. The reliance on customer ratings adds another layer of stress, as workers feel compelled to engage in emotional labour to secure good ratings, knowing that a low score could lead to job loss or suspension [12]. This emotional labour, where workers must maintain a facade of politeness and helpfulness even in difficult situations, further contributes to their psychosocial strain and, over time, to deteriorating health.
One of the most disconcerting aspects of the platform economy is the opacity surrounding its algorithmic systems. Workers often lack an understanding of how the decisions that govern their employment and payments are made, which can lead to a sense of helplessness and unfairness [13]. This lack of transparency makes it difficult for workers to contest or appeal algorithmic decisions, leaving them vulnerable to bias and discrimination [14]. Algorithmic opacity thus becomes a significant barrier in creating a decent, healthy and safe working environment for platform workers.
Addressing these issues will require more than superficial changes to labour laws; it demands a comprehensive regulatory approach that recognizes the unique challenges posed by the platform economy [15]. Algorithms must be made more transparent, and workers should be granted the right to contest decisions that affect their livelihoods. Moreover, specific occupational health and safety (OHS) guidelines tailored to platform work should be developed, including mandatory safety training, regular risk assessments of algorithmic systems and ergonomic guidelines to mitigate the risks associated with prolonged driving or delivery work [16].
Occupational health services have the potential to play a key role in this transformation. Occupational health and safety professionals could be instrumental in conducting risk assessments, offering training programmes and advocating for stronger protections for gig workers. Given the unpredictable schedules and psychosocial stress that come with platform jobs, OHS could help identify risks and mitigate their impacts [17]. As AI continues to advance, especially in algorithmic management, it is critical that its deployment adheres to principles of responsible AI. Trust is key to successful AI integration, and occupational health services can support this by developing tools that allow for dynamic risk assessments, predictive analysis and automated reporting, to ensure that AI is used in ways that enhance worker safety rather than compromise it [18].
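To make this idea concrete, the Python sketch below illustrates one way an occupational health service might prototype a dynamic risk score for platform workers. It is purely illustrative: the field names, weights and thresholds are assumptions chosen for the example, not a validated instrument.

from dataclasses import dataclass

@dataclass
class ShiftRecord:
    hours_worked: float          # hours logged on the app in the last 24 hours
    unpaid_waiting_hours: float  # time logged in but not assigned work
    deliveries_per_hour: float   # pace indicator for the current shift
    rating_pressure: float       # 0-1: how close the worker's rating is to a deactivation threshold

def dynamic_risk_score(shift: ShiftRecord) -> float:
    """Combine workload, income insecurity and rating pressure into a 0-100 score."""
    fatigue = min(shift.hours_worked / 12.0, 1.0)            # long shifts raise fatigue risk
    precarity = min(shift.unpaid_waiting_hours / 4.0, 1.0)   # unpaid waiting signals income insecurity
    pace = min(shift.deliveries_per_hour / 6.0, 1.0)         # rushing raises traffic and strain risk
    score = 100 * (0.35 * fatigue + 0.25 * precarity
                   + 0.20 * pace + 0.20 * shift.rating_pressure)
    return round(score, 1)

# Example: a long shift with substantial unpaid waiting and high rating pressure
print(dynamic_risk_score(ShiftRecord(10.0, 3.0, 5.0, 0.7)))  # prints 78.6

In practice, such a score could feed the automated reporting mentioned above, for instance by prompting follow-up when a worker's shifts repeatedly exceed an agreed threshold.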
The European Union is actively working to address the challenges faced by platform workers through legislative initiatives like the Artificial Intelligence Act (AI Act) and the Platform Work Directive. The AI Act seeks to regulate the use of artificial intelligence, including the algorithmic management systems that control platform workers, aiming to ensure transparency, accountability and the protection of fundamental rights [19]. Meanwhile, the Platform Work Directive focuses on improving working conditions for platform workers by clarifying their employment status, calling for more transparency in algorithms and enhancing their social protections [20]. These initiatives represent important steps in safeguarding the health and safety of platform workers. However, the landscape remains dynamic: technological innovations, especially AI-driven systems, continue to evolve rapidly and may prioritize efficiency and productivity over health and safety. This can lead to new, unforeseen risks, requiring constant adaptation of regulatory frameworks to keep pace with emerging trends in the platform economy.
The rapid expansion of the platform economy in ride-hailing, food delivery and various other sectors represents a significant occupational health challenge. While technology can improve productivity and offer flexibility, it should not come at the cost of workers’ physical and psychosocial health. Policymakers, occupational health professionals and platform companies must collaborate to address the risks inherent in algorithmic management.
For clinicians and policymakers, developing mechanisms to mitigate these risks is critical. Clinicians should advocate for psychosocial and physical health assessments tailored to platform workers, recognizing the unique pressures they face from algorithmic controls. Policymakers can play a vital role by enacting standards for algorithmic transparency and accountability in the workplace, encouraging more worker-centred design in algorithmic systems. These interventions could include requirements for periodic breaks, capped hours or minimum pay thresholds adjusted for algorithm-driven workflows.
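As a purely illustrative example of how such interventions could be encoded directly into algorithm-driven workflows, the Python sketch below shows guardrails an assignment system might check before offering another job. The cap, break interval and minimum rate are assumed values for the example, not proposed standards.

from datetime import timedelta

MAX_DAILY_HOURS = 10.0             # assumed daily cap on time spent working through the app
BREAK_AFTER = timedelta(hours=4)   # assumed limit on continuous work before a mandated break
MIN_PAY_PER_HOUR = 15.0            # assumed minimum effective hourly pay, waiting time included

def may_offer_job(hours_today: float, time_since_break: timedelta):
    """Return (allowed, reason) before the assignment algorithm offers another job."""
    if hours_today >= MAX_DAILY_HOURS:
        return False, "daily hours cap reached"
    if time_since_break >= BREAK_AFTER:
        return False, "mandatory break due"
    return True, "ok"

def pay_top_up(earnings_today: float, hours_on_app: float) -> float:
    """Amount needed so that earnings, including waiting time, meet the assumed minimum rate."""
    return max(MIN_PAY_PER_HOUR * hours_on_app - earnings_today, 0.0)

# Example: 4.5 hours since the last break blocks a new offer, and 9 hours on the app
# with 120 in earnings would require a 15.0 top-up to reach the assumed minimum.
print(may_offer_job(9.0, timedelta(hours=4, minutes=30)))   # (False, 'mandatory break due')
print(pay_top_up(120.0, 9.0))                               # 15.0

The point of the sketch is the design choice: the same algorithmic infrastructure that currently intensifies work could just as easily enforce worker-centred limits if regulation required it.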
Unanswered questions remain, particularly around the long-term health effects of continuous algorithmic management. Future research could explore how specific algorithmic pressures, like performance-based incentives and dynamic scheduling, impact physical and psychosocial health over time. Investigating effective regulatory frameworks for worker protection in diverse platform settings is also essential. Only through concerted efforts can we ensure that the benefits of technological innovation do not undermine the health and safety of the workers who drive the platform economy forward.
The landscape of platform work presents a complex web of challenges, where workers frequently grapple with significant occupational health risks. Musculoskeletal disorders and debilitating stress are commonplace, exacerbated by a critical lack of robust social and occupational health protections. To address these pressing issues, a multi-pronged approach is essential. Transparency in algorithmic processes is paramount, fostering understanding and trust. Equally vital is the implementation of stronger worker safety and health protections, alongside the development of tailored occupational safety measures designed to meet the unique demands of platform-based labour. Looking ahead, the policy implications are substantial. The deployment of AI-driven risk assessment tools can provide valuable insights, while comprehensive labour law reforms are necessary to solidify worker safety, health and rights. Crucially, accountability must be embedded within algorithmic decision-making, ensuring that safety, health and equity are not merely aspirations but fundamental realities for platform workers, especially given that occupational safety and health is a fundamental right recognized by the International Labour Organization.